Mar 11 02:24:08.510589 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 10 23:35:49 -00 2026
Mar 11 02:24:08.510617 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:24:08.510631 kernel: BIOS-provided physical RAM map:
Mar 11 02:24:08.510639 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 11 02:24:08.510648 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 11 02:24:08.510658 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 11 02:24:08.510668 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 11 02:24:08.510676 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 11 02:24:08.510684 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 11 02:24:08.510697 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 11 02:24:08.510705 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 11 02:24:08.510715 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 11 02:24:08.510724 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 11 02:24:08.510732 kernel: NX (Execute Disable) protection: active
Mar 11 02:24:08.510742 kernel: APIC: Static calls initialized
Mar 11 02:24:08.510754 kernel: SMBIOS 2.8 present.
Mar 11 02:24:08.510763 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 11 02:24:08.510774 kernel: Hypervisor detected: KVM
Mar 11 02:24:08.510783 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 11 02:24:08.510791 kernel: kvm-clock: using sched offset of 7729071538 cycles
Mar 11 02:24:08.510800 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 11 02:24:08.510809 kernel: tsc: Detected 2445.426 MHz processor
Mar 11 02:24:08.510819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 11 02:24:08.510830 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 11 02:24:08.510846 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 11 02:24:08.510855 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 11 02:24:08.510863 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 11 02:24:08.510872 kernel: Using GB pages for direct mapping
Mar 11 02:24:08.510880 kernel: ACPI: Early table checksum verification disabled
Mar 11 02:24:08.510890 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 11 02:24:08.510901 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.510910 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.510919 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.510931 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 11 02:24:08.510939 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.511093 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.511106 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.511116 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:24:08.511128 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 11 02:24:08.511137 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 11 02:24:08.511152 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 11 02:24:08.511165 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 11 02:24:08.511174 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 11 02:24:08.511183 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 11 02:24:08.511195 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 11 02:24:08.511206 kernel: No NUMA configuration found
Mar 11 02:24:08.511218 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 11 02:24:08.511231 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 11 02:24:08.511241 kernel: Zone ranges:
Mar 11 02:24:08.511250 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 11 02:24:08.511258 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 11 02:24:08.511352 kernel: Normal empty
Mar 11 02:24:08.511365 kernel: Movable zone start for each node
Mar 11 02:24:08.511376 kernel: Early memory node ranges
Mar 11 02:24:08.511387 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 11 02:24:08.511396 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 11 02:24:08.511405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 11 02:24:08.511419 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 02:24:08.511427 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 11 02:24:08.511436 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 11 02:24:08.511448 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 11 02:24:08.511459 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 11 02:24:08.511470 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 11 02:24:08.511481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 11 02:24:08.511490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 11 02:24:08.511499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 11 02:24:08.511512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 11 02:24:08.511521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 11 02:24:08.511533 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 11 02:24:08.511544 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 11 02:24:08.511555 kernel: TSC deadline timer available
Mar 11 02:24:08.511565 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 11 02:24:08.511574 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 11 02:24:08.511583 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 11 02:24:08.511592 kernel: kvm-guest: setup PV sched yield
Mar 11 02:24:08.511605 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 11 02:24:08.511617 kernel: Booting paravirtualized kernel on KVM
Mar 11 02:24:08.511628 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 11 02:24:08.511640 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 11 02:24:08.511650 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 11 02:24:08.511659 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 11 02:24:08.511668 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 11 02:24:08.511677 kernel: kvm-guest: PV spinlocks enabled
Mar 11 02:24:08.511686 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 11 02:24:08.511704 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:24:08.511715 kernel: random: crng init done
Mar 11 02:24:08.511727 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 02:24:08.511736 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 02:24:08.511745 kernel: Fallback order for Node 0: 0
Mar 11 02:24:08.511753 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 11 02:24:08.511762 kernel: Policy zone: DMA32
Mar 11 02:24:08.511772 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 02:24:08.511788 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 11 02:24:08.511799 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 11 02:24:08.511810 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 11 02:24:08.511820 kernel: ftrace: allocated 149 pages with 4 groups
Mar 11 02:24:08.511828 kernel: Dynamic Preempt: voluntary
Mar 11 02:24:08.511837 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 02:24:08.511847 kernel: rcu: RCU event tracing is enabled.
Mar 11 02:24:08.511858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 11 02:24:08.511869 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 02:24:08.511886 kernel: Rude variant of Tasks RCU enabled.
Mar 11 02:24:08.511895 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 02:24:08.511904 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 02:24:08.511913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 11 02:24:08.511922 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 11 02:24:08.511931 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 02:24:08.511942 kernel: Console: colour VGA+ 80x25
Mar 11 02:24:08.512104 kernel: printk: console [ttyS0] enabled
Mar 11 02:24:08.512118 kernel: ACPI: Core revision 20230628
Mar 11 02:24:08.512133 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 11 02:24:08.512145 kernel: APIC: Switch to symmetric I/O mode setup
Mar 11 02:24:08.512154 kernel: x2apic enabled
Mar 11 02:24:08.512163 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 11 02:24:08.512172 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 11 02:24:08.512181 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 11 02:24:08.512190 kernel: kvm-guest: setup PV IPIs
Mar 11 02:24:08.512202 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 11 02:24:08.512229 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 11 02:24:08.512239 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 11 02:24:08.512249 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 11 02:24:08.512258 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 11 02:24:08.512357 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 11 02:24:08.512370 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 11 02:24:08.512382 kernel: Spectre V2 : Mitigation: Retpolines
Mar 11 02:24:08.512393 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 11 02:24:08.512402 kernel: Speculative Store Bypass: Vulnerable
Mar 11 02:24:08.512416 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 11 02:24:08.512426 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 11 02:24:08.512436 kernel: active return thunk: srso_alias_return_thunk
Mar 11 02:24:08.512448 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 11 02:24:08.512459 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 11 02:24:08.512472 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 11 02:24:08.512483 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 11 02:24:08.512492 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 11 02:24:08.512505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 11 02:24:08.512515 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 11 02:24:08.512525 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 11 02:24:08.512542 kernel: Freeing SMP alternatives memory: 32K
Mar 11 02:24:08.512554 kernel: pid_max: default: 32768 minimum: 301
Mar 11 02:24:08.512566 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 02:24:08.512576 kernel: landlock: Up and running.
Mar 11 02:24:08.512585 kernel: SELinux: Initializing.
Mar 11 02:24:08.512594 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:24:08.512608 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:24:08.512620 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 11 02:24:08.512632 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:24:08.512642 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:24:08.512652 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:24:08.512662 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 11 02:24:08.512671 kernel: signal: max sigframe size: 1776
Mar 11 02:24:08.512683 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 02:24:08.512696 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 02:24:08.512711 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 11 02:24:08.512720 kernel: smp: Bringing up secondary CPUs ...
Mar 11 02:24:08.512730 kernel: smpboot: x86: Booting SMP configuration:
Mar 11 02:24:08.512739 kernel: .... node #0, CPUs: #1 #2 #3
Mar 11 02:24:08.512751 kernel: smp: Brought up 1 node, 4 CPUs
Mar 11 02:24:08.512763 kernel: smpboot: Max logical packages: 1
Mar 11 02:24:08.512772 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 11 02:24:08.512781 kernel: devtmpfs: initialized
Mar 11 02:24:08.512791 kernel: x86/mm: Memory block size: 128MB
Mar 11 02:24:08.512805 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 02:24:08.512818 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 11 02:24:08.512828 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 02:24:08.512837 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 02:24:08.512847 kernel: audit: initializing netlink subsys (disabled)
Mar 11 02:24:08.512856 kernel: audit: type=2000 audit(1773195843.379:1): state=initialized audit_enabled=0 res=1
Mar 11 02:24:08.512868 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 02:24:08.512880 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 11 02:24:08.512892 kernel: cpuidle: using governor menu
Mar 11 02:24:08.512907 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 02:24:08.512916 kernel: dca service started, version 1.12.1
Mar 11 02:24:08.512926 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 11 02:24:08.512935 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 11 02:24:08.512946 kernel: PCI: Using configuration type 1 for base access
Mar 11 02:24:08.513112 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 11 02:24:08.513124 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 02:24:08.513136 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 02:24:08.513147 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 02:24:08.513161 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 02:24:08.513171 kernel: ACPI: Added _OSI(Module Device)
Mar 11 02:24:08.513180 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 02:24:08.513189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 02:24:08.513202 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 02:24:08.513213 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 11 02:24:08.513224 kernel: ACPI: Interpreter enabled
Mar 11 02:24:08.513235 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 11 02:24:08.513245 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 11 02:24:08.513258 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 11 02:24:08.513357 kernel: PCI: Using E820 reservations for host bridge windows
Mar 11 02:24:08.513370 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 11 02:24:08.513381 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 11 02:24:08.513615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 11 02:24:08.513809 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 11 02:24:08.514146 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 11 02:24:08.514167 kernel: PCI host bridge to bus 0000:00
Mar 11 02:24:08.514443 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 11 02:24:08.514607 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 11 02:24:08.514766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 11 02:24:08.514933 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 11 02:24:08.515256 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 11 02:24:08.515521 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 11 02:24:08.515694 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 11 02:24:08.515898 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 11 02:24:08.516248 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 11 02:24:08.516519 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 11 02:24:08.516695 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 11 02:24:08.516875 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 11 02:24:08.517358 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 11 02:24:08.517546 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 12695 usecs
Mar 11 02:24:08.517741 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 11 02:24:08.517920 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 11 02:24:08.518247 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 11 02:24:08.518513 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 11 02:24:08.518707 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 11 02:24:08.518893 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 11 02:24:08.519398 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 11 02:24:08.519577 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 11 02:24:08.519771 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 11 02:24:08.520150 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 11 02:24:08.520423 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 11 02:24:08.520604 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 11 02:24:08.520789 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 11 02:24:08.521135 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 11 02:24:08.521411 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 11 02:24:08.521585 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 12695 usecs
Mar 11 02:24:08.521765 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 11 02:24:08.521937 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 11 02:24:08.522356 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 11 02:24:08.522568 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 11 02:24:08.522748 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 11 02:24:08.522765 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 11 02:24:08.522777 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 11 02:24:08.522788 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 11 02:24:08.522799 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 11 02:24:08.522811 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 11 02:24:08.522822 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 11 02:24:08.522837 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 11 02:24:08.522848 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 11 02:24:08.522859 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 11 02:24:08.522870 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 11 02:24:08.522881 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 11 02:24:08.522893 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 11 02:24:08.522904 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 11 02:24:08.522916 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 11 02:24:08.522928 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 11 02:24:08.522943 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 11 02:24:08.523130 kernel: iommu: Default domain type: Translated
Mar 11 02:24:08.523141 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 11 02:24:08.523150 kernel: PCI: Using ACPI for IRQ routing
Mar 11 02:24:08.523159 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 11 02:24:08.523170 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 11 02:24:08.523182 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 11 02:24:08.523458 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 11 02:24:08.523634 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 11 02:24:08.523821 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 11 02:24:08.523838 kernel: vgaarb: loaded
Mar 11 02:24:08.523850 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 11 02:24:08.523862 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 11 02:24:08.523873 kernel: clocksource: Switched to clocksource kvm-clock
Mar 11 02:24:08.523884 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 02:24:08.523896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 02:24:08.523907 kernel: pnp: PnP ACPI init
Mar 11 02:24:08.524400 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 11 02:24:08.524424 kernel: pnp: PnP ACPI: found 6 devices
Mar 11 02:24:08.524435 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 11 02:24:08.524446 kernel: NET: Registered PF_INET protocol family
Mar 11 02:24:08.524456 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 02:24:08.524467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 02:24:08.524477 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 02:24:08.524487 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 02:24:08.524497 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 02:24:08.524511 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 02:24:08.524521 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:24:08.524531 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:24:08.524542 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 02:24:08.524552 kernel: NET: Registered PF_XDP protocol family
Mar 11 02:24:08.524705 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 11 02:24:08.524852 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 11 02:24:08.525150 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 11 02:24:08.525401 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 11 02:24:08.525545 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 11 02:24:08.525685 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 11 02:24:08.525701 kernel: PCI: CLS 0 bytes, default 64
Mar 11 02:24:08.525712 kernel: Initialise system trusted keyrings
Mar 11 02:24:08.525721 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 02:24:08.525730 kernel: Key type asymmetric registered
Mar 11 02:24:08.525740 kernel: Asymmetric key parser 'x509' registered
Mar 11 02:24:08.525750 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 11 02:24:08.525765 kernel: io scheduler mq-deadline registered
Mar 11 02:24:08.525775 kernel: io scheduler kyber registered
Mar 11 02:24:08.525785 kernel: io scheduler bfq registered
Mar 11 02:24:08.525796 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 11 02:24:08.525806 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 11 02:24:08.525817 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 11 02:24:08.525827 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 11 02:24:08.525837 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 02:24:08.525848 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 11 02:24:08.525861 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 11 02:24:08.525871 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 11 02:24:08.525882 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 11 02:24:08.526199 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 11 02:24:08.526216 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 11 02:24:08.526451 kernel: rtc_cmos 00:04: registered as rtc0
Mar 11 02:24:08.526571 kernel: rtc_cmos 00:04: setting system clock to 2026-03-11T02:24:07 UTC (1773195847)
Mar 11 02:24:08.526686 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 11 02:24:08.526700 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 11 02:24:08.526707 kernel: NET: Registered PF_INET6 protocol family
Mar 11 02:24:08.526713 kernel: Segment Routing with IPv6
Mar 11 02:24:08.526720 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 02:24:08.526727 kernel: NET: Registered PF_PACKET protocol family
Mar 11 02:24:08.526734 kernel: Key type dns_resolver registered
Mar 11 02:24:08.526740 kernel: IPI shorthand broadcast: enabled
Mar 11 02:24:08.526747 kernel: sched_clock: Marking stable (3194054754, 911134098)->(4711698644, -606509792)
Mar 11 02:24:08.526827 kernel: registered taskstats version 1
Mar 11 02:24:08.526838 kernel: Loading compiled-in X.509 certificates
Mar 11 02:24:08.526845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6607fbe6d184c26ff6db73f5ff7c44b69c5a8579'
Mar 11 02:24:08.526851 kernel: Key type .fscrypt registered
Mar 11 02:24:08.526858 kernel: Key type fscrypt-provisioning registered
Mar 11 02:24:08.526864 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 02:24:08.526871 kernel: ima: Allocated hash algorithm: sha1
Mar 11 02:24:08.526877 kernel: ima: No architecture policies found
Mar 11 02:24:08.526884 kernel: clk: Disabling unused clocks
Mar 11 02:24:08.526890 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 11 02:24:08.526900 kernel: Write protecting the kernel read-only data: 36864k
Mar 11 02:24:08.526906 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 11 02:24:08.526913 kernel: Run /init as init process
Mar 11 02:24:08.526919 kernel: with arguments:
Mar 11 02:24:08.526926 kernel: /init
Mar 11 02:24:08.526932 kernel: with environment:
Mar 11 02:24:08.526939 kernel: HOME=/
Mar 11 02:24:08.526945 kernel: TERM=linux
Mar 11 02:24:08.527080 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:24:08.527094 systemd[1]: Detected virtualization kvm.
Mar 11 02:24:08.527101 systemd[1]: Detected architecture x86-64.
Mar 11 02:24:08.527108 systemd[1]: Running in initrd.
Mar 11 02:24:08.527115 systemd[1]: No hostname configured, using default hostname.
Mar 11 02:24:08.527121 systemd[1]: Hostname set to .
Mar 11 02:24:08.527128 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:24:08.527135 systemd[1]: Queued start job for default target initrd.target.
Mar 11 02:24:08.527144 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:24:08.527152 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:24:08.527159 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 02:24:08.527166 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:24:08.527173 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 02:24:08.527180 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 02:24:08.527189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 02:24:08.527199 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 02:24:08.527206 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:24:08.527213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:24:08.527220 systemd[1]: Reached target paths.target - Path Units.
Mar 11 02:24:08.527238 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:24:08.527248 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:24:08.527257 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 02:24:08.527264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:24:08.527343 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:24:08.527350 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 02:24:08.527357 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 02:24:08.527365 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:24:08.527372 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:24:08.527379 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:24:08.527389 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 02:24:08.527396 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 02:24:08.527406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:24:08.527413 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 02:24:08.527420 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 02:24:08.527427 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:24:08.527434 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:24:08.527441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:24:08.527469 systemd-journald[194]: Collecting audit messages is disabled.
Mar 11 02:24:08.527489 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 02:24:08.527496 systemd-journald[194]: Journal started
Mar 11 02:24:08.527513 systemd-journald[194]: Runtime Journal (/run/log/journal/248eedcd25bf4c6b82780552f30e0d6e) is 6.0M, max 48.4M, 42.3M free.
Mar 11 02:24:08.542059 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:24:08.549720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:24:08.559454 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 02:24:08.579559 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 02:24:08.579594 systemd-modules-load[195]: Inserted module 'overlay'
Mar 11 02:24:09.066748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 11 02:24:09.066783 kernel: Bridge firewalling registered
Mar 11 02:24:08.585211 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 02:24:08.637081 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 11 02:24:09.067408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:24:09.099385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:24:09.146577 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:24:09.164464 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 02:24:09.165191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 02:24:09.181765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:24:09.202884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 02:24:09.258728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:24:09.271875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:24:09.290696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:24:09.323622 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 11 02:24:09.340616 dracut-cmdline[229]: dracut-dracut-053
Mar 11 02:24:09.346678 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:24:09.389362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 02:24:09.443714 systemd-resolved[270]: Positive Trust Anchors:
Mar 11 02:24:09.443784 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 02:24:09.443810 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 02:24:09.446808 systemd-resolved[270]: Defaulting to hostname 'linux'.
Mar 11 02:24:09.448522 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 02:24:09.469160 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:24:09.561152 kernel: SCSI subsystem initialized
Mar 11 02:24:09.580250 kernel: Loading iSCSI transport class v2.0-870.
Mar 11 02:24:09.603178 kernel: iscsi: registered transport (tcp)
Mar 11 02:24:09.638580 kernel: iscsi: registered transport (qla4xxx)
Mar 11 02:24:09.638743 kernel: QLogic iSCSI HBA Driver
Mar 11 02:24:09.712718 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 11 02:24:09.734590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 11 02:24:09.807089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 11 02:24:09.807165 kernel: device-mapper: uevent: version 1.0.3
Mar 11 02:24:09.814699 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 11 02:24:09.875234 kernel: raid6: avx2x4 gen() 28382 MB/s
Mar 11 02:24:09.896235 kernel: raid6: avx2x2 gen() 29900 MB/s
Mar 11 02:24:09.920808 kernel: raid6: avx2x1 gen() 20998 MB/s
Mar 11 02:24:09.920862 kernel: raid6: using algorithm avx2x2 gen() 29900 MB/s
Mar 11 02:24:09.943263 kernel: raid6: .... xor() 23927 MB/s, rmw enabled
Mar 11 02:24:09.943406 kernel: raid6: using avx2x2 recovery algorithm
Mar 11 02:24:09.980262 kernel: xor: automatically using best checksumming function avx
Mar 11 02:24:10.247436 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 11 02:24:10.269490 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 02:24:10.298389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:24:10.317383 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 11 02:24:10.323617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:24:10.335862 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 11 02:24:10.363366 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Mar 11 02:24:10.416932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 02:24:10.441683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 02:24:10.527900 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:24:10.554572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 11 02:24:10.587645 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 11 02:24:10.604385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 02:24:10.615472 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:24:10.624191 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 02:24:10.664094 kernel: cryptd: max_cpu_qlen set to 1000
Mar 11 02:24:10.664129 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 11 02:24:10.668646 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 11 02:24:10.672745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 02:24:10.732393 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 11 02:24:10.732576 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 11 02:24:10.732588 kernel: GPT:9289727 != 19775487
Mar 11 02:24:10.732598 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 11 02:24:10.732608 kernel: GPT:9289727 != 19775487
Mar 11 02:24:10.732617 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 11 02:24:10.732627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:24:10.672842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:24:10.737222 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:24:10.742364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 02:24:10.742576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:24:10.762849 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:24:10.777600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:24:10.791388 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 02:24:10.880164 kernel: libata version 3.00 loaded.
Mar 11 02:24:10.924364 kernel: ahci 0000:00:1f.2: version 3.0
Mar 11 02:24:10.924585 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 11 02:24:10.932251 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Mar 11 02:24:10.932283 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 11 02:24:10.932525 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 11 02:24:10.940137 kernel: BTRFS: device fsid 1c1071f5-2e45-4924-9ec8-a67042aa7fbc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (464)
Mar 11 02:24:10.940166 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 11 02:24:10.942374 kernel: AES CTR mode by8 optimization enabled
Mar 11 02:24:10.945198 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 11 02:24:11.539382 kernel: scsi host0: ahci
Mar 11 02:24:11.539593 kernel: scsi host1: ahci
Mar 11 02:24:11.539766 kernel: scsi host2: ahci
Mar 11 02:24:11.539942 kernel: scsi host3: ahci
Mar 11 02:24:11.540221 kernel: scsi host4: ahci
Mar 11 02:24:11.540439 kernel: scsi host5: ahci
Mar 11 02:24:11.540593 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 11 02:24:11.540605 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 11 02:24:11.540620 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 11 02:24:11.540630 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 11 02:24:11.540639 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 11 02:24:11.540649 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 11 02:24:11.540664 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 11 02:24:11.540680 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 11 02:24:11.540696 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 11 02:24:11.540713 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 11 02:24:11.540735 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 11 02:24:11.540752 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 11 02:24:11.540768 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 11 02:24:11.540779 kernel: ata3.00: applying bridge limits
Mar 11 02:24:11.540788 kernel: ata3.00: configured for UDMA/100
Mar 11 02:24:11.540798 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 11 02:24:11.541097 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 11 02:24:11.541255 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 11 02:24:11.541266 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 11 02:24:11.547795 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 11 02:24:11.572688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:24:11.583453 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 11 02:24:11.588517 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 11 02:24:11.617487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 11 02:24:11.651603 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 11 02:24:11.662131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:24:11.687561 disk-uuid[567]: Primary Header is updated.
Mar 11 02:24:11.687561 disk-uuid[567]: Secondary Entries is updated.
Mar 11 02:24:11.687561 disk-uuid[567]: Secondary Header is updated.
Mar 11 02:24:11.696184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:24:11.727376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:24:12.729901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:24:12.735138 disk-uuid[570]: The operation has completed successfully.
Mar 11 02:24:12.846507 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 11 02:24:12.846687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 11 02:24:12.892502 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 11 02:24:13.035765 sh[592]: Success
Mar 11 02:24:13.130598 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 11 02:24:13.220473 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 11 02:24:13.239595 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 11 02:24:13.246680 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 11 02:24:13.312802 kernel: BTRFS info (device dm-0): first mount of filesystem 1c1071f5-2e45-4924-9ec8-a67042aa7fbc
Mar 11 02:24:13.312867 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:24:13.312886 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 11 02:24:13.322212 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 11 02:24:13.329285 kernel: BTRFS info (device dm-0): using free space tree
Mar 11 02:24:13.365189 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 11 02:24:13.379117 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 11 02:24:13.405655 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 11 02:24:13.414946 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 11 02:24:13.463909 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:24:13.464096 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:24:13.471230 kernel: BTRFS info (device vda6): using free space tree
Mar 11 02:24:13.489148 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 11 02:24:13.517282 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 11 02:24:13.537657 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:24:13.544808 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 11 02:24:13.570520 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 11 02:24:13.700106 ignition[692]: Ignition 2.19.0
Mar 11 02:24:13.700198 ignition[692]: Stage: fetch-offline
Mar 11 02:24:13.700261 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Mar 11 02:24:13.700277 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:24:13.700485 ignition[692]: parsed url from cmdline: ""
Mar 11 02:24:13.700491 ignition[692]: no config URL provided
Mar 11 02:24:13.700499 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Mar 11 02:24:13.700513 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Mar 11 02:24:13.700548 ignition[692]: op(1): [started] loading QEMU firmware config module
Mar 11 02:24:13.700556 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 11 02:24:13.715744 ignition[692]: op(1): [finished] loading QEMU firmware config module
Mar 11 02:24:13.715770 ignition[692]: QEMU firmware config was not found. Ignoring...
Mar 11 02:24:13.853943 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 02:24:13.894483 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 02:24:13.973612 systemd-networkd[780]: lo: Link UP
Mar 11 02:24:13.973693 systemd-networkd[780]: lo: Gained carrier
Mar 11 02:24:13.991842 systemd-networkd[780]: Enumeration completed
Mar 11 02:24:14.000449 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 02:24:14.026171 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:24:14.026254 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 02:24:14.029104 systemd-networkd[780]: eth0: Link UP
Mar 11 02:24:14.029109 systemd-networkd[780]: eth0: Gained carrier
Mar 11 02:24:14.029120 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:24:14.064410 systemd[1]: Reached target network.target - Network.
Mar 11 02:24:14.111161 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 11 02:24:14.536933 ignition[692]: parsing config with SHA512: 765a5b18fa4b4fbd573b42521378772172bb4e5f588d8348fe0b2e07f514dbcef8ddac894e9dda176c618b502e3702cf3d18986c53458f72c35f782233e4493f
Mar 11 02:24:14.546585 unknown[692]: fetched base config from "system"
Mar 11 02:24:14.546603 unknown[692]: fetched user config from "qemu"
Mar 11 02:24:14.547563 ignition[692]: fetch-offline: fetch-offline passed
Mar 11 02:24:14.547624 ignition[692]: Ignition finished successfully
Mar 11 02:24:14.572202 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 02:24:14.588835 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 11 02:24:14.613427 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 11 02:24:14.657449 ignition[784]: Ignition 2.19.0
Mar 11 02:24:14.657538 ignition[784]: Stage: kargs
Mar 11 02:24:14.657918 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 11 02:24:14.663779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 11 02:24:14.657935 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:24:14.659761 ignition[784]: kargs: kargs passed
Mar 11 02:24:14.659820 ignition[784]: Ignition finished successfully
Mar 11 02:24:14.706467 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 11 02:24:14.751570 ignition[792]: Ignition 2.19.0
Mar 11 02:24:14.751656 ignition[792]: Stage: disks
Mar 11 02:24:14.752590 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 11 02:24:14.757701 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 11 02:24:14.752610 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:24:14.769673 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 11 02:24:14.755249 ignition[792]: disks: disks passed
Mar 11 02:24:14.783263 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 11 02:24:14.755400 ignition[792]: Ignition finished successfully
Mar 11 02:24:14.792847 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 02:24:14.800477 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 02:24:14.808125 systemd[1]: Reached target basic.target - Basic System.
Mar 11 02:24:14.830474 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 11 02:24:14.865394 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 11 02:24:14.871782 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 11 02:24:14.885129 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 11 02:24:15.110152 kernel: EXT4-fs (vda9): mounted filesystem ec53a244-36b1-4b02-8fe8-880c05c7af60 r/w with ordered data mode. Quota mode: none.
Mar 11 02:24:15.112215 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 11 02:24:15.113225 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 11 02:24:15.144408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 02:24:15.157701 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 11 02:24:15.180251 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Mar 11 02:24:15.158518 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 11 02:24:15.158583 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 11 02:24:15.158620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 02:24:15.239112 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 11 02:24:15.273799 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:24:15.273828 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:24:15.273839 kernel: BTRFS info (device vda6): using free space tree
Mar 11 02:24:15.273857 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 11 02:24:15.275789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 02:24:15.309467 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 11 02:24:15.384569 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 11 02:24:15.405527 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 11 02:24:15.415913 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 11 02:24:15.426459 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 11 02:24:15.657836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 11 02:24:15.694246 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 11 02:24:15.712180 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 11 02:24:15.728111 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:24:15.735939 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 11 02:24:15.796319 systemd-networkd[780]: eth0: Gained IPv6LL
Mar 11 02:24:15.808832 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 11 02:24:15.817505 ignition[923]: INFO : Ignition 2.19.0
Mar 11 02:24:15.817505 ignition[923]: INFO : Stage: mount
Mar 11 02:24:15.817505 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:24:15.817505 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:24:15.817505 ignition[923]: INFO : mount: mount passed
Mar 11 02:24:15.817505 ignition[923]: INFO : Ignition finished successfully
Mar 11 02:24:15.863764 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 11 02:24:15.886500 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 11 02:24:16.125504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 02:24:16.151463 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Mar 11 02:24:16.167558 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:24:16.167620 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:24:16.173415 kernel: BTRFS info (device vda6): using free space tree
Mar 11 02:24:16.193465 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 11 02:24:16.195760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 02:24:16.272634 ignition[954]: INFO : Ignition 2.19.0
Mar 11 02:24:16.272634 ignition[954]: INFO : Stage: files
Mar 11 02:24:16.283508 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:24:16.283508 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:24:16.303461 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Mar 11 02:24:16.312847 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 11 02:24:16.312847 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 11 02:24:16.340936 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 11 02:24:16.351309 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 11 02:24:16.351309 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 11 02:24:16.351309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 11 02:24:16.351309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 11 02:24:16.342918 unknown[954]: wrote ssh authorized keys file for user: core
Mar 11 02:24:16.465666 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 11 02:24:16.574885 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 11 02:24:16.574885 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:24:16.605632 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 11 02:24:16.852914 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 11 02:24:17.256077 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 11 02:24:17.271138 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 02:24:17.419249 ignition[954]: INFO : files: files passed
Mar 11 02:24:17.419249 ignition[954]: INFO : Ignition finished successfully
Mar 11 02:24:17.335319 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 11 02:24:17.410544 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 11 02:24:17.420445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 11 02:24:17.439539 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 11 02:24:17.600907 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 11 02:24:17.439685 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 11 02:24:17.659670 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:24:17.659670 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:24:17.470292 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 02:24:17.716617 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:24:17.490713 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 11 02:24:17.541559 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 11 02:24:17.588259 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 11 02:24:17.588643 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 11 02:24:17.600931 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 11 02:24:17.601309 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 11 02:24:17.601499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 11 02:24:17.602832 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 11 02:24:17.637515 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 02:24:17.662476 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 11 02:24:17.706629 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:24:17.716605 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:24:17.725832 systemd[1]: Stopped target timers.target - Timer Units.
Mar 11 02:24:17.733162 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 11 02:24:17.733449 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 02:24:17.755482 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 11 02:24:17.763615 systemd[1]: Stopped target basic.target - Basic System.
Mar 11 02:24:17.770742 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 11 02:24:17.789329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 02:24:17.798883 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 11 02:24:17.807834 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 11 02:24:17.812190 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 02:24:17.824862 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 11 02:24:17.835309 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 11 02:24:17.850757 systemd[1]: Stopped target swap.target - Swaps.
Mar 11 02:24:17.868102 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 11 02:24:17.868431 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 02:24:17.884918 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:24:17.902670 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:24:17.916552 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 11 02:24:17.916939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:24:17.967205 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 11 02:24:17.967567 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 11 02:24:17.997148 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 11 02:24:17.997570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 02:24:18.004215 systemd[1]: Stopped target paths.target - Path Units.
Mar 11 02:24:18.021629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 11 02:24:18.022418 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:24:18.040270 systemd[1]: Stopped target slices.target - Slice Units.
Mar 11 02:24:18.060799 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 11 02:24:18.075254 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 11 02:24:18.075568 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:24:18.101619 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 11 02:24:18.102071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:24:18.109885 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 11 02:24:18.110241 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 02:24:18.124223 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 11 02:24:18.412600 ignition[1005]: INFO : Ignition 2.19.0
Mar 11 02:24:18.412600 ignition[1005]: INFO : Stage: umount
Mar 11 02:24:18.412600 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:24:18.412600 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:24:18.412600 ignition[1005]: INFO : umount: umount passed
Mar 11 02:24:18.412600 ignition[1005]: INFO : Ignition finished successfully
Mar 11 02:24:18.124551 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 11 02:24:18.195620 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 11 02:24:18.201499 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 11 02:24:18.201711 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:24:18.246553 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 11 02:24:18.261172 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 11 02:24:18.261328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:24:18.276606 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 11 02:24:18.276879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 02:24:18.415805 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 11 02:24:18.417170 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 11 02:24:18.417423 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 11 02:24:18.421692 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 11 02:24:18.421865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 11 02:24:18.453473 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 11 02:24:18.453719 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 11 02:24:18.456600 systemd[1]: Stopped target network.target - Network.
Mar 11 02:24:18.478674 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 11 02:24:18.478791 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 11 02:24:18.495210 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 11 02:24:18.495294 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 11 02:24:18.516814 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 11 02:24:18.516911 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 11 02:24:18.531630 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 11 02:24:18.531734 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 11 02:24:18.552213 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 11 02:24:18.552297 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 11 02:24:18.570616 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 11 02:24:18.590815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 11 02:24:18.643631 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 11 02:24:18.643844 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 11 02:24:18.651634 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 11 02:24:18.651727 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:24:18.688850 systemd-networkd[780]: eth0: DHCPv6 lease lost
Mar 11 02:24:18.693881 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 11 02:24:18.694448 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 11 02:24:18.701887 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 11 02:24:18.701934 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:24:18.794755 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 11 02:24:18.813556 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 11 02:24:18.813630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 02:24:18.829912 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 11 02:24:18.830087 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:24:18.840661 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 11 02:24:18.840715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:24:18.854627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:24:18.901668 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 11 02:24:18.901868 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 11 02:24:19.048246 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 11 02:24:19.048720 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:24:19.056890 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 11 02:24:19.057122 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:24:19.075938 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 11 02:24:19.076225 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:24:19.103305 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 11 02:24:19.103486 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 02:24:19.119221 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 11 02:24:19.119296 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 11 02:24:19.139849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 02:24:19.139917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:24:19.181290 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 11 02:24:19.181434 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 11 02:24:19.181504 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:24:19.196548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 02:24:19.196629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:24:19.200621 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 11 02:24:19.200758 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 11 02:24:19.208583 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 11 02:24:19.223413 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 11 02:24:19.243736 systemd[1]: Switching root.
Mar 11 02:24:19.283411 systemd-journald[194]: Journal stopped
Mar 11 02:24:20.705545 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 11 02:24:20.705651 kernel: SELinux: policy capability network_peer_controls=1
Mar 11 02:24:20.705682 kernel: SELinux: policy capability open_perms=1
Mar 11 02:24:20.705706 kernel: SELinux: policy capability extended_socket_class=1
Mar 11 02:24:20.705729 kernel: SELinux: policy capability always_check_network=0
Mar 11 02:24:20.705747 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 11 02:24:20.705758 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 11 02:24:20.705768 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 11 02:24:20.705778 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 11 02:24:20.705788 kernel: audit: type=1403 audit(1773195859.449:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 11 02:24:20.705800 systemd[1]: Successfully loaded SELinux policy in 52.379ms.
Mar 11 02:24:20.705821 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.464ms.
Mar 11 02:24:20.705835 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:24:20.705847 systemd[1]: Detected virtualization kvm.
Mar 11 02:24:20.705858 systemd[1]: Detected architecture x86-64.
Mar 11 02:24:20.705868 systemd[1]: Detected first boot.
Mar 11 02:24:20.705879 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:24:20.705890 zram_generator::config[1053]: No configuration found.
Mar 11 02:24:20.705901 systemd[1]: Populated /etc with preset unit settings.
Mar 11 02:24:20.705912 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 11 02:24:20.705924 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 11 02:24:20.705935 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 11 02:24:20.705946 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 11 02:24:20.706012 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 11 02:24:20.706026 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 11 02:24:20.706037 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 11 02:24:20.706049 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 11 02:24:20.706059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 11 02:24:20.706070 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 11 02:24:20.706084 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 11 02:24:20.706095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:24:20.706106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:24:20.706116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 11 02:24:20.706128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 11 02:24:20.706139 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 11 02:24:20.706149 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:24:20.706160 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 11 02:24:20.706173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:24:20.706184 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 11 02:24:20.706194 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 11 02:24:20.706205 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 11 02:24:20.706215 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 11 02:24:20.706226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:24:20.706236 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 02:24:20.706247 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:24:20.706262 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:24:20.706272 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 11 02:24:20.706283 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 11 02:24:20.706293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:24:20.706304 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:24:20.706315 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:24:20.706325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 11 02:24:20.706336 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 11 02:24:20.706347 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 11 02:24:20.706396 systemd[1]: Mounting media.mount - External Media Directory...
Mar 11 02:24:20.706409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:24:20.706419 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 11 02:24:20.706430 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 11 02:24:20.706440 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 11 02:24:20.706452 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 11 02:24:20.706462 systemd[1]: Reached target machines.target - Containers.
Mar 11 02:24:20.706473 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 11 02:24:20.706484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 02:24:20.706498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:24:20.706509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 11 02:24:20.706519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 02:24:20.706531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 02:24:20.706542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 02:24:20.706552 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 11 02:24:20.706571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 02:24:20.706593 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 11 02:24:20.706619 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 11 02:24:20.706641 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 11 02:24:20.706657 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 11 02:24:20.706668 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 11 02:24:20.706678 kernel: fuse: init (API version 7.39)
Mar 11 02:24:20.706689 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:24:20.706699 kernel: loop: module loaded
Mar 11 02:24:20.706709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:24:20.706720 kernel: ACPI: bus type drm_connector registered
Mar 11 02:24:20.706733 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 11 02:24:20.706766 systemd-journald[1137]: Collecting audit messages is disabled.
Mar 11 02:24:20.706791 systemd-journald[1137]: Journal started
Mar 11 02:24:20.706810 systemd-journald[1137]: Runtime Journal (/run/log/journal/248eedcd25bf4c6b82780552f30e0d6e) is 6.0M, max 48.4M, 42.3M free.
Mar 11 02:24:20.144996 systemd[1]: Queued start job for default target multi-user.target.
Mar 11 02:24:20.167335 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 11 02:24:20.168303 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 11 02:24:20.168841 systemd[1]: systemd-journald.service: Consumed 2.967s CPU time.
Mar 11 02:24:20.718824 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 11 02:24:20.727290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 02:24:20.732052 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 11 02:24:20.732097 systemd[1]: Stopped verity-setup.service.
Mar 11 02:24:20.740125 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:24:20.747542 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:24:20.748808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 11 02:24:20.752501 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 11 02:24:20.756229 systemd[1]: Mounted media.mount - External Media Directory.
Mar 11 02:24:20.759651 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 11 02:24:20.763447 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 11 02:24:20.767179 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 11 02:24:20.770693 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 11 02:24:20.775126 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:24:20.779746 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 11 02:24:20.780219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 11 02:24:20.784643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 02:24:20.784863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 02:24:20.789352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 02:24:20.789611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 02:24:20.793612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 02:24:20.793836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 02:24:20.798436 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 11 02:24:20.798647 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 11 02:24:20.802769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 02:24:20.803032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 02:24:20.807284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:24:20.811555 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 11 02:24:20.816244 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 11 02:24:20.833488 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 11 02:24:20.846179 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 11 02:24:20.851478 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 11 02:24:20.855396 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 11 02:24:20.855463 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 02:24:20.860212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 11 02:24:20.865657 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 11 02:24:20.870764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 11 02:24:20.874543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 02:24:20.876177 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 11 02:24:20.881693 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 11 02:24:20.886634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 02:24:20.893942 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 11 02:24:20.898635 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 02:24:20.900200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 02:24:20.909690 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 11 02:24:20.921899 systemd-journald[1137]: Time spent on flushing to /var/log/journal/248eedcd25bf4c6b82780552f30e0d6e is 28.836ms for 942 entries.
Mar 11 02:24:20.921899 systemd-journald[1137]: System Journal (/var/log/journal/248eedcd25bf4c6b82780552f30e0d6e) is 8.0M, max 195.6M, 187.6M free.
Mar 11 02:24:20.965734 systemd-journald[1137]: Received client request to flush runtime journal.
Mar 11 02:24:20.965771 kernel: loop0: detected capacity change from 0 to 142488
Mar 11 02:24:20.929285 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 11 02:24:20.940930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:24:20.954247 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 11 02:24:20.959619 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 11 02:24:20.967259 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 11 02:24:20.974651 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 11 02:24:20.982554 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 11 02:24:20.990502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:24:20.996034 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 11 02:24:21.007338 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 11 02:24:21.017260 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 11 02:24:21.027082 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 11 02:24:21.034183 kernel: loop1: detected capacity change from 0 to 219192
Mar 11 02:24:21.035779 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 11 02:24:21.045232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 02:24:21.058554 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 11 02:24:21.060635 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 11 02:24:21.079450 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 11 02:24:21.095168 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 11 02:24:21.095701 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 11 02:24:21.103072 kernel: loop2: detected capacity change from 0 to 140768
Mar 11 02:24:21.103809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:24:21.155029 kernel: loop3: detected capacity change from 0 to 142488
Mar 11 02:24:21.178073 kernel: loop4: detected capacity change from 0 to 219192
Mar 11 02:24:21.200047 kernel: loop5: detected capacity change from 0 to 140768
Mar 11 02:24:21.212192 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 11 02:24:21.212934 (sd-merge)[1191]: Merged extensions into '/usr'.
Mar 11 02:24:21.217583 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 11 02:24:21.217630 systemd[1]: Reloading...
Mar 11 02:24:21.298029 zram_generator::config[1221]: No configuration found.
Mar 11 02:24:21.322879 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 11 02:24:21.421628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:24:21.474277 systemd[1]: Reloading finished in 256 ms.
Mar 11 02:24:21.512797 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 11 02:24:21.517250 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 11 02:24:21.522119 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 11 02:24:21.553689 systemd[1]: Starting ensure-sysext.service...
Mar 11 02:24:21.559531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 02:24:21.567680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:24:21.575560 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Mar 11 02:24:21.575610 systemd[1]: Reloading...
Mar 11 02:24:21.589406 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 11 02:24:21.589766 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 11 02:24:21.590860 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 11 02:24:21.591237 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 11 02:24:21.591341 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 11 02:24:21.596674 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 02:24:21.596683 systemd-tmpfiles[1256]: Skipping /boot
Mar 11 02:24:21.612237 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 02:24:21.612250 systemd-tmpfiles[1256]: Skipping /boot
Mar 11 02:24:21.632297 systemd-udevd[1257]: Using default interface naming scheme 'v255'.
Mar 11 02:24:21.653063 zram_generator::config[1286]: No configuration found.
Mar 11 02:24:21.754092 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1314)
Mar 11 02:24:21.800032 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 11 02:24:21.821027 kernel: ACPI: button: Power Button [PWRF]
Mar 11 02:24:21.823544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:24:21.908083 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 11 02:24:21.934257 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 11 02:24:21.940231 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 11 02:24:21.940671 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 11 02:24:21.967694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 11 02:24:21.972549 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 11 02:24:21.974448 systemd[1]: Reloading finished in 398 ms.
Mar 11 02:24:22.012024 kernel: mousedev: PS/2 mouse device common for all mice
Mar 11 02:24:22.036575 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:24:22.037121 kernel: kvm_amd: TSC scaling supported
Mar 11 02:24:22.037161 kernel: kvm_amd: Nested Virtualization enabled
Mar 11 02:24:22.037181 kernel: kvm_amd: Nested Paging enabled
Mar 11 02:24:22.037230 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 11 02:24:22.042453 kernel: kvm_amd: PMU virtualization is disabled
Mar 11 02:24:22.093702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:24:22.117103 kernel: EDAC MC: Ver: 3.0.0
Mar 11 02:24:22.123140 systemd[1]: Finished ensure-sysext.service.
Mar 11 02:24:22.155313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:24:22.166302 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 11 02:24:22.173249 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 11 02:24:22.177708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 02:24:22.179843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 02:24:22.187165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 02:24:22.201322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 02:24:22.208343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 02:24:22.213320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 02:24:22.215541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 11 02:24:22.225250 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 11 02:24:22.236219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 02:24:22.244337 augenrules[1377]: No rules
Mar 11 02:24:22.254281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 02:24:22.266259 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 11 02:24:22.273081 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 11 02:24:22.278558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:24:22.282181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:24:22.283479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 11 02:24:22.288057 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 11 02:24:22.292284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 02:24:22.292561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 02:24:22.297639 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 02:24:22.297853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 02:24:22.301868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 02:24:22.302426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 02:24:22.307661 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 02:24:22.307923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 02:24:22.312235 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 11 02:24:22.313224 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 11 02:24:22.336341 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 11 02:24:22.341728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 02:24:22.341861 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 02:24:22.343912 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 11 02:24:22.350260 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 11 02:24:22.354238 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 11 02:24:22.355244 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 11 02:24:22.356195 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 11 02:24:22.363177 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 11 02:24:22.373475 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 11 02:24:22.395677 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 11 02:24:22.506141 systemd-networkd[1375]: lo: Link UP
Mar 11 02:24:22.506657 systemd-networkd[1375]: lo: Gained carrier
Mar 11 02:24:22.509756 systemd-networkd[1375]: Enumeration completed
Mar 11 02:24:22.511238 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:24:22.511284 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 02:24:22.513558 systemd-networkd[1375]: eth0: Link UP
Mar 11 02:24:22.513570 systemd-networkd[1375]: eth0: Gained carrier
Mar 11 02:24:22.513594 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:24:22.523426 systemd-resolved[1383]: Positive Trust Anchors:
Mar 11 02:24:22.523598 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 02:24:22.524727 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 02:24:22.532158 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Mar 11 02:24:22.534152 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 11 02:24:22.535440 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection.
Mar 11 02:24:22.537213 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 11 02:24:22.537295 systemd-timesyncd[1384]: Initial clock synchronization to Wed 2026-03-11 02:24:22.873643 UTC.
Mar 11 02:24:22.595858 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 11 02:24:22.607466 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 11 02:24:22.618871 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 02:24:22.626091 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 02:24:22.632266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:24:22.640025 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:24:22.645175 systemd[1]: Reached target network.target - Network.
Mar 11 02:24:22.648581 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:24:22.652882 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 02:24:22.658099 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 11 02:24:22.664229 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 11 02:24:22.670078 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 11 02:24:22.675839 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 11 02:24:22.675908 systemd[1]: Reached target paths.target - Path Units.
Mar 11 02:24:22.680147 systemd[1]: Reached target time-set.target - System Time Set.
Mar 11 02:24:22.685235 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 11 02:24:22.690460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 11 02:24:22.695905 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 02:24:22.701302 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 11 02:24:22.707760 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 11 02:24:22.724083 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 11 02:24:22.729102 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 11 02:24:22.734599 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 11 02:24:22.739419 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 11 02:24:22.743709 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 02:24:22.747594 systemd[1]: Reached target basic.target - Basic System.
Mar 11 02:24:22.751296 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 11 02:24:22.751176 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 11 02:24:22.751202 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 11 02:24:22.752726 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 11 02:24:22.767208 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 11 02:24:22.772863 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 11 02:24:22.778195 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 11 02:24:22.781900 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 11 02:24:22.783879 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 11 02:24:22.787926 jq[1422]: false
Mar 11 02:24:22.791136 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 11 02:24:22.796112 dbus-daemon[1421]: [system] SELinux support is enabled
Mar 11 02:24:22.797946 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 11 02:24:22.805670 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 11 02:24:22.816420 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 11 02:24:22.820164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 11 02:24:22.820737 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 11 02:24:22.824173 systemd[1]: Starting update-engine.service - Update Engine...
Mar 11 02:24:22.828646 extend-filesystems[1423]: Found loop3
Mar 11 02:24:22.828646 extend-filesystems[1423]: Found loop4
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found loop5
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found sr0
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda1
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda2
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda3
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found usr
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda4
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda6
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda7
Mar 11 02:24:22.835263 extend-filesystems[1423]: Found vda9
Mar 11 02:24:22.835263 extend-filesystems[1423]: Checking size of /dev/vda9
Mar 11 02:24:22.905554 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 11 02:24:22.905663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1304)
Mar 11 02:24:22.834365 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 11 02:24:22.905845 extend-filesystems[1423]: Resized partition /dev/vda9
Mar 11 02:24:22.854621 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 11 02:24:22.909298 update_engine[1435]: I20260311 02:24:22.850880 1435 main.cc:92] Flatcar Update Engine starting
Mar 11 02:24:22.909298 update_engine[1435]: I20260311 02:24:22.852787 1435 update_check_scheduler.cc:74] Next update check in 10m54s
Mar 11 02:24:22.909760 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024)
Mar 11 02:24:22.883696 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 11 02:24:22.914101 jq[1437]: true
Mar 11 02:24:22.917612 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 11 02:24:22.917904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 11 02:24:22.918431 systemd[1]: motdgen.service: Deactivated successfully.
Mar 11 02:24:22.918702 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 11 02:24:22.936209 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 11 02:24:22.936621 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 11 02:24:22.947238 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 11 02:24:22.958865 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 11 02:24:22.972570 jq[1448]: true
Mar 11 02:24:22.972832 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 11 02:24:22.972832 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 11 02:24:22.972832 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 11 02:24:22.988870 extend-filesystems[1423]: Resized filesystem in /dev/vda9
Mar 11 02:24:22.982366 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 11 02:24:22.982461 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 11 02:24:22.984693 systemd-logind[1434]: New seat seat0.
Mar 11 02:24:22.987749 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 11 02:24:22.999599 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 11 02:24:23.000064 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 11 02:24:23.012532 tar[1447]: linux-amd64/LICENSE
Mar 11 02:24:23.012532 tar[1447]: linux-amd64/helm
Mar 11 02:24:23.020523 systemd[1]: Started update-engine.service - Update Engine.
Mar 11 02:24:23.033447 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
Mar 11 02:24:23.034570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 11 02:24:23.034721 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 11 02:24:23.042851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 11 02:24:23.043213 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 11 02:24:23.058464 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 11 02:24:23.071637 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 11 02:24:23.083230 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 11 02:24:23.128787 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 11 02:24:23.206166 containerd[1449]: time="2026-03-11T02:24:23.201557796Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 11 02:24:23.229467 containerd[1449]: time="2026-03-11T02:24:23.229418700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.232494 containerd[1449]: time="2026-03-11T02:24:23.232294757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:24:23.233379 containerd[1449]: time="2026-03-11T02:24:23.233244145Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 11 02:24:23.233379 containerd[1449]: time="2026-03-11T02:24:23.233302141Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 11 02:24:23.233557 containerd[1449]: time="2026-03-11T02:24:23.233491422Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 11 02:24:23.233557 containerd[1449]: time="2026-03-11T02:24:23.233540471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.233667 containerd[1449]: time="2026-03-11T02:24:23.233616193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:24:23.233667 containerd[1449]: time="2026-03-11T02:24:23.233654865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234034 containerd[1449]: time="2026-03-11T02:24:23.233909471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234034 containerd[1449]: time="2026-03-11T02:24:23.233951816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234034 containerd[1449]: time="2026-03-11T02:24:23.233968719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234034 containerd[1449]: time="2026-03-11T02:24:23.233980976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234185 containerd[1449]: time="2026-03-11T02:24:23.234129384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234521 containerd[1449]: time="2026-03-11T02:24:23.234482389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234712 containerd[1449]: time="2026-03-11T02:24:23.234680387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:24:23.234738 containerd[1449]: time="2026-03-11T02:24:23.234713317Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 11 02:24:23.235013 containerd[1449]: time="2026-03-11T02:24:23.234916274Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 11 02:24:23.235076 containerd[1449]: time="2026-03-11T02:24:23.235059661Z" level=info msg="metadata content store policy set" policy=shared
Mar 11 02:24:23.241136 containerd[1449]: time="2026-03-11T02:24:23.240714459Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 11 02:24:23.241136 containerd[1449]: time="2026-03-11T02:24:23.240795047Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 11 02:24:23.241136 containerd[1449]: time="2026-03-11T02:24:23.240813631Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 11 02:24:23.241136 containerd[1449]: time="2026-03-11T02:24:23.240827965Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 11 02:24:23.241136 containerd[1449]: time="2026-03-11T02:24:23.240846371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 11 02:24:23.241136 containerd[1449]: time="2026-03-11T02:24:23.241048087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 11 02:24:23.241318 containerd[1449]: time="2026-03-11T02:24:23.241300625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 11 02:24:23.241465 containerd[1449]: time="2026-03-11T02:24:23.241414341Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 11 02:24:23.241493 containerd[1449]: time="2026-03-11T02:24:23.241485668Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 11 02:24:23.241512 containerd[1449]: time="2026-03-11T02:24:23.241499700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 11 02:24:23.241529 containerd[1449]: time="2026-03-11T02:24:23.241512698Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241529 containerd[1449]: time="2026-03-11T02:24:23.241524130Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241600 containerd[1449]: time="2026-03-11T02:24:23.241534738Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241600 containerd[1449]: time="2026-03-11T02:24:23.241546493Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241600 containerd[1449]: time="2026-03-11T02:24:23.241559408Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241600 containerd[1449]: time="2026-03-11T02:24:23.241570934Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241600 containerd[1449]: time="2026-03-11T02:24:23.241581594Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241600 containerd[1449]: time="2026-03-11T02:24:23.241591679Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241609448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241622290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241633074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241643911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241654613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241666107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241684430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241694 containerd[1449]: time="2026-03-11T02:24:23.241695549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241706343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241719112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241729520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241739857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241750089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241768087Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241786244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241797205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.241823 containerd[1449]: time="2026-03-11T02:24:23.241806685Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241876102Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241893120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241903299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241914010Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241922593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241933377Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241943682Z" level=info msg="NRI interface is disabled by configuration."
Mar 11 02:24:23.242211 containerd[1449]: time="2026-03-11T02:24:23.241958507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 11 02:24:23.242342 containerd[1449]: time="2026-03-11T02:24:23.242260835Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 11 02:24:23.242342 containerd[1449]: time="2026-03-11T02:24:23.242312587Z" level=info msg="Connect containerd service"
Mar 11 02:24:23.242342 containerd[1449]: time="2026-03-11T02:24:23.242341674Z" level=info msg="using legacy CRI server"
Mar 11 02:24:23.242342 containerd[1449]: time="2026-03-11T02:24:23.242348272Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 11 02:24:23.242551 containerd[1449]: time="2026-03-11T02:24:23.242416217Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 11 02:24:23.243092 containerd[1449]: time="2026-03-11T02:24:23.243066185Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 11 02:24:23.243550 containerd[1449]: time="2026-03-11T02:24:23.243428012Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 11 02:24:23.243584 containerd[1449]: time="2026-03-11T02:24:23.243568663Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 11 02:24:23.243604 containerd[1449]: time="2026-03-11T02:24:23.243430763Z" level=info msg="Start subscribing containerd event"
Mar 11 02:24:23.243622 containerd[1449]: time="2026-03-11T02:24:23.243609244Z" level=info msg="Start recovering state"
Mar 11 02:24:23.243839 containerd[1449]: time="2026-03-11T02:24:23.243663084Z" level=info msg="Start event monitor"
Mar 11 02:24:23.243839 containerd[1449]: time="2026-03-11T02:24:23.243682200Z" level=info msg="Start snapshots syncer"
Mar 11 02:24:23.243839 containerd[1449]: time="2026-03-11T02:24:23.243690729Z" level=info msg="Start cni network conf syncer for default"
Mar 11 02:24:23.243839 containerd[1449]: time="2026-03-11T02:24:23.243697652Z" level=info msg="Start streaming server"
Mar 11 02:24:23.243834 systemd[1]: Started containerd.service - containerd container runtime.
Mar 11 02:24:23.247574 containerd[1449]: time="2026-03-11T02:24:23.243982295Z" level=info msg="containerd successfully booted in 0.043712s"
Mar 11 02:24:23.320630 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 11 02:24:23.355377 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 11 02:24:23.367486 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 11 02:24:23.389978 systemd[1]: issuegen.service: Deactivated successfully.
Mar 11 02:24:23.390481 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 11 02:24:23.407514 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 11 02:24:23.423066 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 11 02:24:23.431384 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 11 02:24:23.436642 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 11 02:24:23.441142 systemd[1]: Reached target getty.target - Login Prompts. Mar 11 02:24:23.547929 tar[1447]: linux-amd64/README.md Mar 11 02:24:23.569981 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 11 02:24:23.925795 systemd-networkd[1375]: eth0: Gained IPv6LL Mar 11 02:24:23.930142 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 11 02:24:23.936862 systemd[1]: Reached target network-online.target - Network is Online. Mar 11 02:24:23.951557 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 11 02:24:23.959130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:23.965554 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 11 02:24:24.007657 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 11 02:24:24.013733 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 11 02:24:24.014140 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 11 02:24:24.021097 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 11 02:24:24.626728 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 11 02:24:24.641620 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:50678.service - OpenSSH per-connection server daemon (10.0.0.1:50678). Mar 11 02:24:24.773412 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 50678 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:24.776779 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:24.796422 systemd-logind[1434]: New session 1 of user core. 
Mar 11 02:24:24.799151 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 11 02:24:24.821816 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 02:24:24.841370 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 02:24:24.850400 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 02:24:24.862932 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 02:24:25.045315 systemd[1535]: Queued start job for default target default.target. Mar 11 02:24:25.058903 systemd[1535]: Created slice app.slice - User Application Slice. Mar 11 02:24:25.059046 systemd[1535]: Reached target paths.target - Paths. Mar 11 02:24:25.059068 systemd[1535]: Reached target timers.target - Timers. Mar 11 02:24:25.061581 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 02:24:25.085497 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 02:24:25.085768 systemd[1535]: Reached target sockets.target - Sockets. Mar 11 02:24:25.085797 systemd[1535]: Reached target basic.target - Basic System. Mar 11 02:24:25.085921 systemd[1535]: Reached target default.target - Main User Target. Mar 11 02:24:25.086141 systemd[1535]: Startup finished in 212ms. Mar 11 02:24:25.086221 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 02:24:25.100422 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 02:24:25.147747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:25.154807 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 02:24:25.161410 systemd[1]: Startup finished in 3.449s (kernel) + 11.559s (initrd) + 5.762s (userspace) = 20.771s. 
Mar 11 02:24:25.163229 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:24:25.171057 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688). Mar 11 02:24:25.316147 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:25.317291 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:25.325473 systemd-logind[1434]: New session 2 of user core. Mar 11 02:24:25.335406 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 11 02:24:25.402193 sshd[1552]: pam_unix(sshd:session): session closed for user core Mar 11 02:24:25.419239 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:50688.service: Deactivated successfully. Mar 11 02:24:25.421474 systemd[1]: session-2.scope: Deactivated successfully. Mar 11 02:24:25.424474 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. Mar 11 02:24:25.431946 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:50696.service - OpenSSH per-connection server daemon (10.0.0.1:50696). Mar 11 02:24:25.433778 systemd-logind[1434]: Removed session 2. Mar 11 02:24:25.492469 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 50696 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:25.495077 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:25.502284 systemd-logind[1434]: New session 3 of user core. Mar 11 02:24:25.517525 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 11 02:24:25.579723 sshd[1569]: pam_unix(sshd:session): session closed for user core Mar 11 02:24:25.592578 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:50696.service: Deactivated successfully. Mar 11 02:24:25.594532 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 11 02:24:25.597329 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit. Mar 11 02:24:25.610612 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:50702.service - OpenSSH per-connection server daemon (10.0.0.1:50702). Mar 11 02:24:25.613181 systemd-logind[1434]: Removed session 3. Mar 11 02:24:25.662250 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 50702 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:25.666360 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:25.676759 systemd-logind[1434]: New session 4 of user core. Mar 11 02:24:25.686419 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 02:24:25.755291 sshd[1577]: pam_unix(sshd:session): session closed for user core Mar 11 02:24:25.768303 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:50702.service: Deactivated successfully. Mar 11 02:24:25.774316 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 02:24:25.778498 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. Mar 11 02:24:25.788052 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:50716.service - OpenSSH per-connection server daemon (10.0.0.1:50716). Mar 11 02:24:25.789344 systemd-logind[1434]: Removed session 4. Mar 11 02:24:25.827541 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 50716 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:25.829880 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:25.840786 systemd-logind[1434]: New session 5 of user core. 
Mar 11 02:24:25.850750 kubelet[1549]: E0311 02:24:25.847970 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:24:25.851771 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 11 02:24:25.856084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:24:25.856305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:24:25.856792 systemd[1]: kubelet.service: Consumed 1.200s CPU time. Mar 11 02:24:25.928293 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 11 02:24:25.928822 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:24:25.950428 sudo[1589]: pam_unix(sudo:session): session closed for user root Mar 11 02:24:25.954169 sshd[1585]: pam_unix(sshd:session): session closed for user core Mar 11 02:24:25.968541 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:50716.service: Deactivated successfully. Mar 11 02:24:25.970896 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 02:24:25.973474 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Mar 11 02:24:25.975244 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:50718.service - OpenSSH per-connection server daemon (10.0.0.1:50718). Mar 11 02:24:25.976583 systemd-logind[1434]: Removed session 5. Mar 11 02:24:26.042749 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 50718 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:26.044923 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:26.051842 systemd-logind[1434]: New session 6 of user core. 
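The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected before `kubeadm init` or `kubeadm join` has run, since kubeadm is what writes that config file; until then every kubelet start fails the same way. A minimal sketch of the startup check that is failing (the path is taken from the log; the variable and output format are illustrative, not kubelet's actual code):

```shell
# Hedged sketch: reproduce the file-existence check that makes kubelet
# exit in the log above. kubeadm normally creates this file during
# `kubeadm init`/`kubeadm join`; until then, restarts keep failing.
KUBELET_CONFIG="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ ! -f "$KUBELET_CONFIG" ]; then
    echo "open $KUBELET_CONFIG: no such file or directory" >&2
    exit_code=1
else
    exit_code=0
fi
echo "exit_code=$exit_code"
```

This matches the later log entries: systemd schedules a restart (restart counter 1 at 02:24:36) and the unit fails again with the identical error until the cluster is bootstrapped.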
Mar 11 02:24:26.063310 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 11 02:24:26.124660 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 11 02:24:26.125311 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:24:26.131244 sudo[1598]: pam_unix(sudo:session): session closed for user root Mar 11 02:24:26.141944 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 11 02:24:26.142672 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:24:26.169503 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 11 02:24:26.172875 auditctl[1601]: No rules Mar 11 02:24:26.174334 systemd[1]: audit-rules.service: Deactivated successfully. Mar 11 02:24:26.174684 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 11 02:24:26.177380 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:24:26.225518 augenrules[1619]: No rules Mar 11 02:24:26.227772 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:24:26.229410 sudo[1597]: pam_unix(sudo:session): session closed for user root Mar 11 02:24:26.232547 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 11 02:24:26.245735 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:50718.service: Deactivated successfully. Mar 11 02:24:26.247933 systemd[1]: session-6.scope: Deactivated successfully. Mar 11 02:24:26.250491 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. Mar 11 02:24:26.263650 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:50734.service - OpenSSH per-connection server daemon (10.0.0.1:50734). Mar 11 02:24:26.265354 systemd-logind[1434]: Removed session 6. 
Mar 11 02:24:26.328570 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 50734 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:24:26.330864 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:24:26.338306 systemd-logind[1434]: New session 7 of user core. Mar 11 02:24:26.353437 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 11 02:24:26.416463 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 02:24:26.416864 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:24:26.792454 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 11 02:24:26.792858 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 02:24:27.125715 dockerd[1648]: time="2026-03-11T02:24:27.125517847Z" level=info msg="Starting up" Mar 11 02:24:27.426402 dockerd[1648]: time="2026-03-11T02:24:27.426085945Z" level=info msg="Loading containers: start." Mar 11 02:24:27.606079 kernel: Initializing XFRM netlink socket Mar 11 02:24:27.745914 systemd-networkd[1375]: docker0: Link UP Mar 11 02:24:27.783476 dockerd[1648]: time="2026-03-11T02:24:27.783381507Z" level=info msg="Loading containers: done." Mar 11 02:24:27.804714 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3154100663-merged.mount: Deactivated successfully. 
Mar 11 02:24:27.808426 dockerd[1648]: time="2026-03-11T02:24:27.808304256Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 02:24:27.808628 dockerd[1648]: time="2026-03-11T02:24:27.808543767Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 11 02:24:27.808854 dockerd[1648]: time="2026-03-11T02:24:27.808766412Z" level=info msg="Daemon has completed initialization" Mar 11 02:24:27.869925 dockerd[1648]: time="2026-03-11T02:24:27.869252446Z" level=info msg="API listen on /run/docker.sock" Mar 11 02:24:27.869752 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 02:24:28.516142 containerd[1449]: time="2026-03-11T02:24:28.516049812Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 11 02:24:29.116516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479577506.mount: Deactivated successfully. 
Mar 11 02:24:30.579035 containerd[1449]: time="2026-03-11T02:24:30.578907248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:30.579926 containerd[1449]: time="2026-03-11T02:24:30.579837844Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 11 02:24:30.581775 containerd[1449]: time="2026-03-11T02:24:30.581632377Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:30.587116 containerd[1449]: time="2026-03-11T02:24:30.586821287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:30.588758 containerd[1449]: time="2026-03-11T02:24:30.588684253Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.072580146s" Mar 11 02:24:30.588758 containerd[1449]: time="2026-03-11T02:24:30.588744850Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 11 02:24:30.590068 containerd[1449]: time="2026-03-11T02:24:30.589839933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 11 02:24:32.334656 containerd[1449]: time="2026-03-11T02:24:32.334579768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:32.339683 containerd[1449]: time="2026-03-11T02:24:32.339620356Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 11 02:24:32.340946 containerd[1449]: time="2026-03-11T02:24:32.340891498Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:32.343946 containerd[1449]: time="2026-03-11T02:24:32.343891160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:32.345125 containerd[1449]: time="2026-03-11T02:24:32.345056708Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.755183617s" Mar 11 02:24:32.345125 containerd[1449]: time="2026-03-11T02:24:32.345105193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 11 02:24:32.345739 containerd[1449]: time="2026-03-11T02:24:32.345691600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 11 02:24:32.482335 kernel: hrtimer: interrupt took 3919761 ns Mar 11 02:24:33.550038 containerd[1449]: time="2026-03-11T02:24:33.549910910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 
02:24:33.550911 containerd[1449]: time="2026-03-11T02:24:33.550780376Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 11 02:24:33.552303 containerd[1449]: time="2026-03-11T02:24:33.552248262Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:33.556114 containerd[1449]: time="2026-03-11T02:24:33.556037644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:33.558722 containerd[1449]: time="2026-03-11T02:24:33.558590812Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.212872929s" Mar 11 02:24:33.558722 containerd[1449]: time="2026-03-11T02:24:33.558626134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 11 02:24:33.559582 containerd[1449]: time="2026-03-11T02:24:33.559537033Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 11 02:24:34.571628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917807885.mount: Deactivated successfully. 
Mar 11 02:24:34.881341 containerd[1449]: time="2026-03-11T02:24:34.880926041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:34.882403 containerd[1449]: time="2026-03-11T02:24:34.882350691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 11 02:24:34.883649 containerd[1449]: time="2026-03-11T02:24:34.883608947Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:34.887288 containerd[1449]: time="2026-03-11T02:24:34.887199031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:34.887887 containerd[1449]: time="2026-03-11T02:24:34.887798412Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.328225328s" Mar 11 02:24:34.887887 containerd[1449]: time="2026-03-11T02:24:34.887845561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 11 02:24:34.888713 containerd[1449]: time="2026-03-11T02:24:34.888632970Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 11 02:24:35.402467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639406887.mount: Deactivated successfully. 
Mar 11 02:24:36.106724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 02:24:36.121217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:36.318787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:36.338686 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:24:36.395463 kubelet[1929]: E0311 02:24:36.395092 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:24:36.400598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:24:36.400848 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
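The repeated "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" notices come from `EnvironmentFile=` lines in the kubelet unit whose files do not define those variables yet. A sketch of defining one of them in an environment file (the demo directory and the `--node-ip` value are illustrative; 10.0.0.84 is the node address seen elsewhere in this log, and the real file location depends on the distribution, e.g. `/etc/default/kubelet` or `/etc/sysconfig/kubelet`):

```shell
# Hedged sketch: define KUBELET_EXTRA_ARGS in the environment file the
# kubelet unit references, silencing the "Referenced but unset
# environment variable" notice. Writing to a demo dir, not the live
# system; the real path is distribution-specific.
ENV_DIR="${ENV_DIR:-/tmp/kubelet-demo}"
mkdir -p "$ENV_DIR"
printf 'KUBELET_EXTRA_ARGS=--node-ip=10.0.0.84\n' > "$ENV_DIR/kubelet"
cat "$ENV_DIR/kubelet"
```

The notice is harmless (an unset variable expands to an empty string), so this is cosmetic unless extra flags are actually needed.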
Mar 11 02:24:36.837085 containerd[1449]: time="2026-03-11T02:24:36.836888107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:36.838207 containerd[1449]: time="2026-03-11T02:24:36.838139039Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 11 02:24:36.839379 containerd[1449]: time="2026-03-11T02:24:36.839325880Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:36.843075 containerd[1449]: time="2026-03-11T02:24:36.843033397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:36.844336 containerd[1449]: time="2026-03-11T02:24:36.844268069Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.955587949s" Mar 11 02:24:36.844336 containerd[1449]: time="2026-03-11T02:24:36.844321017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 11 02:24:36.845136 containerd[1449]: time="2026-03-11T02:24:36.845094696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 11 02:24:37.279742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306379325.mount: Deactivated successfully. 
Mar 11 02:24:37.289069 containerd[1449]: time="2026-03-11T02:24:37.288845236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:37.290333 containerd[1449]: time="2026-03-11T02:24:37.290256232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 11 02:24:37.292114 containerd[1449]: time="2026-03-11T02:24:37.292050680Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:37.294770 containerd[1449]: time="2026-03-11T02:24:37.294699754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:37.295592 containerd[1449]: time="2026-03-11T02:24:37.295535361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 450.39807ms" Mar 11 02:24:37.295649 containerd[1449]: time="2026-03-11T02:24:37.295590266Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 11 02:24:37.298091 containerd[1449]: time="2026-03-11T02:24:37.297899882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 11 02:24:37.847482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669076949.mount: Deactivated successfully. 
Mar 11 02:24:38.812251 containerd[1449]: time="2026-03-11T02:24:38.812137044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:38.813455 containerd[1449]: time="2026-03-11T02:24:38.813378470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 11 02:24:38.814866 containerd[1449]: time="2026-03-11T02:24:38.814798045Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:38.818602 containerd[1449]: time="2026-03-11T02:24:38.818508789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:38.820469 containerd[1449]: time="2026-03-11T02:24:38.820402785Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.522451904s" Mar 11 02:24:38.820469 containerd[1449]: time="2026-03-11T02:24:38.820459923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 11 02:24:42.478525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:42.492660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:42.542342 systemd[1]: Reloading requested from client PID 2037 ('systemctl') (unit session-7.scope)... Mar 11 02:24:42.542399 systemd[1]: Reloading... 
Mar 11 02:24:42.668080 zram_generator::config[2079]: No configuration found. Mar 11 02:24:42.856493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:24:42.937115 systemd[1]: Reloading finished in 393 ms. Mar 11 02:24:43.016033 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 11 02:24:43.016161 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 11 02:24:43.016550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:43.019209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:43.208707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:43.239816 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:24:43.332071 kubelet[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 02:24:43.332071 kubelet[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 11 02:24:43.332447 kubelet[2125]: I0311 02:24:43.332077 2125 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 11 02:24:43.656090 kubelet[2125]: I0311 02:24:43.655849 2125 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 11 02:24:43.656090 kubelet[2125]: I0311 02:24:43.655902 2125 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 02:24:43.657725 kubelet[2125]: I0311 02:24:43.657642 2125 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 11 02:24:43.657725 kubelet[2125]: I0311 02:24:43.657700 2125 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 11 02:24:43.658230 kubelet[2125]: I0311 02:24:43.658168 2125 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 11 02:24:43.764513 kubelet[2125]: E0311 02:24:43.764368 2125 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 11 02:24:43.766183 kubelet[2125]: I0311 02:24:43.766078 2125 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 02:24:43.777329 kubelet[2125]: E0311 02:24:43.777188 2125 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 11 02:24:43.777329 kubelet[2125]: I0311 02:24:43.777325 2125 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 11 02:24:43.788680 kubelet[2125]: I0311 02:24:43.788592 2125 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 02:24:43.790626 kubelet[2125]: I0311 02:24:43.790499 2125 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 02:24:43.790720 kubelet[2125]: I0311 02:24:43.790555 2125 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 02:24:43.790720 kubelet[2125]: I0311 02:24:43.790706 2125 topology_manager.go:138] "Creating topology manager with none policy"
Mar 11 02:24:43.790720 kubelet[2125]: I0311 02:24:43.790715 2125 container_manager_linux.go:306] "Creating device plugin manager"
Mar 11 02:24:43.791071 kubelet[2125]: I0311 02:24:43.790818 2125 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 02:24:43.794549 kubelet[2125]: I0311 02:24:43.794508 2125 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:24:43.795114 kubelet[2125]: I0311 02:24:43.794845 2125 kubelet.go:475] "Attempting to sync node with API server"
Mar 11 02:24:43.795114 kubelet[2125]: I0311 02:24:43.794936 2125 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 02:24:43.795510 kubelet[2125]: I0311 02:24:43.795281 2125 kubelet.go:387] "Adding apiserver pod source"
Mar 11 02:24:43.795510 kubelet[2125]: I0311 02:24:43.795366 2125 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 02:24:43.797680 kubelet[2125]: E0311 02:24:43.796486 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 11 02:24:43.798569 kubelet[2125]: E0311 02:24:43.797778 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 11 02:24:43.800151 kubelet[2125]: I0311 02:24:43.799809 2125 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 11 02:24:43.800579 kubelet[2125]: I0311 02:24:43.800523 2125 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 02:24:43.800579 kubelet[2125]: I0311 02:24:43.800571 2125 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 02:24:43.800639 kubelet[2125]: W0311 02:24:43.800627 2125 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 11 02:24:43.806290 kubelet[2125]: I0311 02:24:43.806212 2125 server.go:1262] "Started kubelet"
Mar 11 02:24:43.807292 kubelet[2125]: I0311 02:24:43.807213 2125 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 02:24:43.807292 kubelet[2125]: I0311 02:24:43.807286 2125 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 02:24:43.808084 kubelet[2125]: I0311 02:24:43.807795 2125 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 02:24:43.808154 kubelet[2125]: I0311 02:24:43.808137 2125 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 02:24:43.813650 kubelet[2125]: I0311 02:24:43.812209 2125 server.go:310] "Adding debug handlers to kubelet server"
Mar 11 02:24:43.813650 kubelet[2125]: I0311 02:24:43.812379 2125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 11 02:24:43.814709 kubelet[2125]: I0311 02:24:43.814634 2125 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 02:24:43.815318 kubelet[2125]: E0311 02:24:43.813199 2125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189ba842f37279d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:24:43.80609583 +0000 UTC m=+0.558350060,LastTimestamp:2026-03-11 02:24:43.80609583 +0000 UTC m=+0.558350060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 11 02:24:43.816378 kubelet[2125]: E0311 02:24:43.815648 2125 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:24:43.816378 kubelet[2125]: I0311 02:24:43.815680 2125 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 11 02:24:43.816781 kubelet[2125]: I0311 02:24:43.816540 2125 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 02:24:43.817099 kubelet[2125]: I0311 02:24:43.817030 2125 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 02:24:43.820829 kubelet[2125]: E0311 02:24:43.820755 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms"
Mar 11 02:24:43.820914 kubelet[2125]: E0311 02:24:43.820901 2125 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 02:24:43.821846 kubelet[2125]: E0311 02:24:43.821729 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 11 02:24:43.826440 kubelet[2125]: I0311 02:24:43.826327 2125 factory.go:223] Registration of the containerd container factory successfully
Mar 11 02:24:43.826440 kubelet[2125]: I0311 02:24:43.826394 2125 factory.go:223] Registration of the systemd container factory successfully
Mar 11 02:24:43.826522 kubelet[2125]: I0311 02:24:43.826467 2125 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 02:24:43.857724 kubelet[2125]: I0311 02:24:43.857519 2125 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 11 02:24:43.857724 kubelet[2125]: I0311 02:24:43.857651 2125 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 11 02:24:43.857724 kubelet[2125]: I0311 02:24:43.857673 2125 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:24:43.863226 kubelet[2125]: I0311 02:24:43.863154 2125 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 11 02:24:43.865619 kubelet[2125]: I0311 02:24:43.865541 2125 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 11 02:24:43.865619 kubelet[2125]: I0311 02:24:43.865612 2125 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 11 02:24:43.865734 kubelet[2125]: I0311 02:24:43.865651 2125 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 11 02:24:43.866304 kubelet[2125]: E0311 02:24:43.865751 2125 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 11 02:24:43.866304 kubelet[2125]: I0311 02:24:43.865847 2125 policy_none.go:49] "None policy: Start"
Mar 11 02:24:43.866304 kubelet[2125]: I0311 02:24:43.865863 2125 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 11 02:24:43.866304 kubelet[2125]: I0311 02:24:43.865874 2125 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 11 02:24:43.866513 kubelet[2125]: E0311 02:24:43.866453 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 11 02:24:43.867761 kubelet[2125]: I0311 02:24:43.867700 2125 policy_none.go:47] "Start"
Mar 11 02:24:43.874842 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 11 02:24:43.893069 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 11 02:24:43.899066 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 11 02:24:43.910338 kubelet[2125]: E0311 02:24:43.909575 2125 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 11 02:24:43.910338 kubelet[2125]: I0311 02:24:43.909794 2125 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 11 02:24:43.910338 kubelet[2125]: I0311 02:24:43.909806 2125 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 11 02:24:43.910338 kubelet[2125]: I0311 02:24:43.910109 2125 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 11 02:24:43.911726 kubelet[2125]: E0311 02:24:43.911679 2125 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 11 02:24:43.911791 kubelet[2125]: E0311 02:24:43.911746 2125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 11 02:24:43.981035 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 11 02:24:43.992323 kubelet[2125]: E0311 02:24:43.992188 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:43.994799 systemd[1]: Created slice kubepods-burstable-pod328f6c372c7979059974ab052964c8bc.slice - libcontainer container kubepods-burstable-pod328f6c372c7979059974ab052964c8bc.slice.
Mar 11 02:24:44.007272 kubelet[2125]: E0311 02:24:44.007179 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:44.011840 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 11 02:24:44.012999 kubelet[2125]: I0311 02:24:44.012801 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:24:44.013448 kubelet[2125]: E0311 02:24:44.013398 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Mar 11 02:24:44.014913 kubelet[2125]: E0311 02:24:44.014841 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:44.018374 kubelet[2125]: I0311 02:24:44.018322 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 11 02:24:44.018444 kubelet[2125]: I0311 02:24:44.018385 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:44.018444 kubelet[2125]: I0311 02:24:44.018412 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:44.018444 kubelet[2125]: I0311 02:24:44.018435 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:44.018614 kubelet[2125]: I0311 02:24:44.018457 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/328f6c372c7979059974ab052964c8bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"328f6c372c7979059974ab052964c8bc\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:44.018614 kubelet[2125]: I0311 02:24:44.018488 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/328f6c372c7979059974ab052964c8bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"328f6c372c7979059974ab052964c8bc\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:44.018614 kubelet[2125]: I0311 02:24:44.018553 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/328f6c372c7979059974ab052964c8bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"328f6c372c7979059974ab052964c8bc\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:44.018614 kubelet[2125]: I0311 02:24:44.018603 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:44.018767 kubelet[2125]: I0311 02:24:44.018708 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:44.022309 kubelet[2125]: E0311 02:24:44.022278 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms"
Mar 11 02:24:44.215752 kubelet[2125]: I0311 02:24:44.215539 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:24:44.216359 kubelet[2125]: E0311 02:24:44.216196 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Mar 11 02:24:44.298482 kubelet[2125]: E0311 02:24:44.298331 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:44.299922 containerd[1449]: time="2026-03-11T02:24:44.299805454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 11 02:24:44.312938 kubelet[2125]: E0311 02:24:44.312685 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:44.313675 containerd[1449]: time="2026-03-11T02:24:44.313603826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:328f6c372c7979059974ab052964c8bc,Namespace:kube-system,Attempt:0,}"
Mar 11 02:24:44.323996 kubelet[2125]: E0311 02:24:44.323916 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:44.325108 containerd[1449]: time="2026-03-11T02:24:44.324886618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 11 02:24:44.423690 kubelet[2125]: E0311 02:24:44.423482 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms"
Mar 11 02:24:44.619354 kubelet[2125]: I0311 02:24:44.619159 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:24:44.619801 kubelet[2125]: E0311 02:24:44.619695 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Mar 11 02:24:44.732788 kubelet[2125]: E0311 02:24:44.732624 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 11 02:24:44.747665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711666106.mount: Deactivated successfully.
Mar 11 02:24:44.759601 containerd[1449]: time="2026-03-11T02:24:44.759448756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:24:44.762443 containerd[1449]: time="2026-03-11T02:24:44.762337698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 11 02:24:44.764341 containerd[1449]: time="2026-03-11T02:24:44.764254601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:24:44.765690 containerd[1449]: time="2026-03-11T02:24:44.765602416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 11 02:24:44.767664 containerd[1449]: time="2026-03-11T02:24:44.767566314Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:24:44.770230 containerd[1449]: time="2026-03-11T02:24:44.770116534Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:24:44.770353 containerd[1449]: time="2026-03-11T02:24:44.770257373Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 11 02:24:44.773263 containerd[1449]: time="2026-03-11T02:24:44.773105295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:24:44.778048 containerd[1449]: time="2026-03-11T02:24:44.777898877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.215349ms"
Mar 11 02:24:44.780369 containerd[1449]: time="2026-03-11T02:24:44.780278783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.353853ms"
Mar 11 02:24:44.783902 containerd[1449]: time="2026-03-11T02:24:44.783839757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.794209ms"
Mar 11 02:24:44.822695 kubelet[2125]: E0311 02:24:44.822543 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 11 02:24:44.911883 kubelet[2125]: E0311 02:24:44.911672 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 11 02:24:44.922293 containerd[1449]: time="2026-03-11T02:24:44.922067954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:24:44.922844 containerd[1449]: time="2026-03-11T02:24:44.922387006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:24:44.922844 containerd[1449]: time="2026-03-11T02:24:44.922478299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:24:44.922844 containerd[1449]: time="2026-03-11T02:24:44.922502224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:24:44.922844 containerd[1449]: time="2026-03-11T02:24:44.922644038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:24:44.925307 containerd[1449]: time="2026-03-11T02:24:44.924525696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:24:44.925307 containerd[1449]: time="2026-03-11T02:24:44.924630065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:24:44.925307 containerd[1449]: time="2026-03-11T02:24:44.925100286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:24:44.925519 containerd[1449]: time="2026-03-11T02:24:44.925348436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:24:44.925519 containerd[1449]: time="2026-03-11T02:24:44.925437248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:24:44.925519 containerd[1449]: time="2026-03-11T02:24:44.925451169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:24:44.925656 containerd[1449]: time="2026-03-11T02:24:44.925540843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:24:44.964266 systemd[1]: Started cri-containerd-49514e6714abf391ffd4165f823c47d53737960f5102217a4d5afd7bfcb7b061.scope - libcontainer container 49514e6714abf391ffd4165f823c47d53737960f5102217a4d5afd7bfcb7b061.
Mar 11 02:24:44.970878 systemd[1]: Started cri-containerd-03ec965942b01e59d1955cad20ba212fbeb3b95dd742b4a11f97fbc0a5ce1d93.scope - libcontainer container 03ec965942b01e59d1955cad20ba212fbeb3b95dd742b4a11f97fbc0a5ce1d93.
Mar 11 02:24:44.973341 systemd[1]: Started cri-containerd-71bdcc349dbcde9f007e4f78db7b8d69069ae82a13cfb37b30c87faf1171dfc8.scope - libcontainer container 71bdcc349dbcde9f007e4f78db7b8d69069ae82a13cfb37b30c87faf1171dfc8.
Mar 11 02:24:45.036334 containerd[1449]: time="2026-03-11T02:24:45.036222737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"49514e6714abf391ffd4165f823c47d53737960f5102217a4d5afd7bfcb7b061\""
Mar 11 02:24:45.039544 kubelet[2125]: E0311 02:24:45.039505 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:45.053188 containerd[1449]: time="2026-03-11T02:24:45.052891777Z" level=info msg="CreateContainer within sandbox \"49514e6714abf391ffd4165f823c47d53737960f5102217a4d5afd7bfcb7b061\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 11 02:24:45.070521 containerd[1449]: time="2026-03-11T02:24:45.070407093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ec965942b01e59d1955cad20ba212fbeb3b95dd742b4a11f97fbc0a5ce1d93\""
Mar 11 02:24:45.071850 kubelet[2125]: E0311 02:24:45.071778 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:45.080621 containerd[1449]: time="2026-03-11T02:24:45.080415817Z" level=info msg="CreateContainer within sandbox \"03ec965942b01e59d1955cad20ba212fbeb3b95dd742b4a11f97fbc0a5ce1d93\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 11 02:24:45.082679 containerd[1449]: time="2026-03-11T02:24:45.082403653Z" level=info msg="CreateContainer within sandbox \"49514e6714abf391ffd4165f823c47d53737960f5102217a4d5afd7bfcb7b061\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70c72143ee014df4e05aec55760ebcc3a93d0ff27f72c733dc9bc6cd86146ff1\""
Mar 11 02:24:45.083575 containerd[1449]: time="2026-03-11T02:24:45.083403048Z" level=info msg="StartContainer for \"70c72143ee014df4e05aec55760ebcc3a93d0ff27f72c733dc9bc6cd86146ff1\""
Mar 11 02:24:45.088069 containerd[1449]: time="2026-03-11T02:24:45.087813879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:328f6c372c7979059974ab052964c8bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"71bdcc349dbcde9f007e4f78db7b8d69069ae82a13cfb37b30c87faf1171dfc8\""
Mar 11 02:24:45.089151 kubelet[2125]: E0311 02:24:45.089122 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:45.099068 containerd[1449]: time="2026-03-11T02:24:45.098944592Z" level=info msg="CreateContainer within sandbox \"71bdcc349dbcde9f007e4f78db7b8d69069ae82a13cfb37b30c87faf1171dfc8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 11 02:24:45.125444 kubelet[2125]: E0311 02:24:45.125188 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 11 02:24:45.136652 containerd[1449]: time="2026-03-11T02:24:45.136592220Z" level=info msg="CreateContainer within sandbox \"03ec965942b01e59d1955cad20ba212fbeb3b95dd742b4a11f97fbc0a5ce1d93\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ca5138d143a1b23e74ee35f5f4e18f45e10da2404723ead5c7dbab601462a3b\""
Mar 11 02:24:45.137586 containerd[1449]: time="2026-03-11T02:24:45.137529143Z" level=info msg="StartContainer for \"2ca5138d143a1b23e74ee35f5f4e18f45e10da2404723ead5c7dbab601462a3b\""
Mar 11 02:24:45.157140 containerd[1449]: time="2026-03-11T02:24:45.157042263Z" level=info msg="CreateContainer within sandbox \"71bdcc349dbcde9f007e4f78db7b8d69069ae82a13cfb37b30c87faf1171dfc8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc763e2b55fe2ad9225d2ff396d8ffd21decdcdfb57aed39916b6edfdb603277\""
Mar 11 02:24:45.157697 containerd[1449]: time="2026-03-11T02:24:45.157629794Z" level=info msg="StartContainer for \"cc763e2b55fe2ad9225d2ff396d8ffd21decdcdfb57aed39916b6edfdb603277\""
Mar 11 02:24:45.158283 systemd[1]: Started cri-containerd-70c72143ee014df4e05aec55760ebcc3a93d0ff27f72c733dc9bc6cd86146ff1.scope - libcontainer container 70c72143ee014df4e05aec55760ebcc3a93d0ff27f72c733dc9bc6cd86146ff1.
Mar 11 02:24:45.230223 kubelet[2125]: E0311 02:24:45.228731 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s"
Mar 11 02:24:45.237308 systemd[1]: Started cri-containerd-2ca5138d143a1b23e74ee35f5f4e18f45e10da2404723ead5c7dbab601462a3b.scope - libcontainer container 2ca5138d143a1b23e74ee35f5f4e18f45e10da2404723ead5c7dbab601462a3b.
Mar 11 02:24:45.282341 systemd[1]: Started cri-containerd-cc763e2b55fe2ad9225d2ff396d8ffd21decdcdfb57aed39916b6edfdb603277.scope - libcontainer container cc763e2b55fe2ad9225d2ff396d8ffd21decdcdfb57aed39916b6edfdb603277.
Mar 11 02:24:45.296009 containerd[1449]: time="2026-03-11T02:24:45.295877445Z" level=info msg="StartContainer for \"70c72143ee014df4e05aec55760ebcc3a93d0ff27f72c733dc9bc6cd86146ff1\" returns successfully"
Mar 11 02:24:45.319219 containerd[1449]: time="2026-03-11T02:24:45.319067792Z" level=info msg="StartContainer for \"2ca5138d143a1b23e74ee35f5f4e18f45e10da2404723ead5c7dbab601462a3b\" returns successfully"
Mar 11 02:24:45.353068 containerd[1449]: time="2026-03-11T02:24:45.352905226Z" level=info msg="StartContainer for \"cc763e2b55fe2ad9225d2ff396d8ffd21decdcdfb57aed39916b6edfdb603277\" returns successfully"
Mar 11 02:24:45.425066 kubelet[2125]: I0311 02:24:45.422595 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:24:45.425066 kubelet[2125]: E0311 02:24:45.423149 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Mar 11 02:24:46.009372 kubelet[2125]: E0311 02:24:46.008720 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:46.012477 kubelet[2125]: E0311 02:24:46.012067 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:46.012602 kubelet[2125]: E0311 02:24:46.012563 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:46.012839 kubelet[2125]: E0311 02:24:46.012779 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:46.015923 kubelet[2125]: E0311 02:24:46.015903 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:46.016603 kubelet[2125]: E0311 02:24:46.016323 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:47.026150 kubelet[2125]: I0311 02:24:47.026047 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:24:47.027061 kubelet[2125]: E0311 02:24:47.026092 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:47.027061 kubelet[2125]: E0311 02:24:47.026577 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:47.027633 kubelet[2125]: E0311 02:24:47.027556 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:47.027894 kubelet[2125]: E0311 02:24:47.027783 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:48.023680 kubelet[2125]: E0311 02:24:48.023492 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:24:48.023855 kubelet[2125]: E0311 02:24:48.023694 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:50.940510 kubelet[2125]: E0311 02:24:50.940363 2125 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 11 02:24:51.142501 kubelet[2125]: E0311 02:24:51.141928 2125 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189ba842f37279d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:24:43.80609583 +0000 UTC m=+0.558350060,LastTimestamp:2026-03-11 02:24:43.80609583 +0000 UTC m=+0.558350060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 11 02:24:51.149266 kubelet[2125]: I0311 02:24:51.148716 2125 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 02:24:51.149266 kubelet[2125]: E0311 02:24:51.148753 2125 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 11 02:24:51.222561 kubelet[2125]: I0311 02:24:51.222127 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:24:51.233338 kubelet[2125]: E0311 02:24:51.233187 2125 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:24:51.233338 kubelet[2125]: I0311 02:24:51.233240 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:51.235712 kubelet[2125]: E0311 02:24:51.235515 2125 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:51.235712 kubelet[2125]: I0311 02:24:51.235586 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:51.237908 kubelet[2125]: E0311 02:24:51.237867 2125 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:51.802823 kubelet[2125]: I0311 02:24:51.802713 2125 apiserver.go:52] "Watching apiserver"
Mar 11 02:24:51.817291 kubelet[2125]: I0311 02:24:51.817046 2125 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 02:24:52.197125 kubelet[2125]: I0311 02:24:52.196662 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:52.214331 kubelet[2125]: E0311 02:24:52.214242 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:53.066347 kubelet[2125]: E0311 02:24:53.065189 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:53.956460 kubelet[2125]: I0311 02:24:53.956299 2125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9562489090000001 podStartE2EDuration="1.956248909s" podCreationTimestamp="2026-03-11 02:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:53.953482848 +0000 UTC m=+10.705737077" watchObservedRunningTime="2026-03-11 02:24:53.956248909 +0000 UTC m=+10.708503148"
Mar 11 02:24:54.200425 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-7.scope)...
Mar 11 02:24:54.200466 systemd[1]: Reloading...
Mar 11 02:24:54.325062 zram_generator::config[2459]: No configuration found.
Mar 11 02:24:54.475148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:24:54.623131 systemd[1]: Reloading finished in 421 ms.
Mar 11 02:24:54.693528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:24:54.721255 systemd[1]: kubelet.service: Deactivated successfully.
Mar 11 02:24:54.721674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 02:24:54.721779 systemd[1]: kubelet.service: Consumed 2.048s CPU time, 130.8M memory peak, 0B memory swap peak.
Mar 11 02:24:54.735688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:24:54.940756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 02:24:54.949332 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 11 02:24:55.226193 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 11 02:24:55.226193 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 11 02:24:55.226193 kubelet[2504]: I0311 02:24:55.225712 2504 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 11 02:24:55.250698 kubelet[2504]: I0311 02:24:55.247872 2504 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 11 02:24:55.250698 kubelet[2504]: I0311 02:24:55.248103 2504 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 02:24:55.250698 kubelet[2504]: I0311 02:24:55.248394 2504 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 11 02:24:55.250698 kubelet[2504]: I0311 02:24:55.248583 2504 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 11 02:24:55.250698 kubelet[2504]: I0311 02:24:55.249776 2504 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 11 02:24:55.250698 kubelet[2504]: I0311 02:24:55.256764 2504 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 11 02:24:55.364902 kubelet[2504]: I0311 02:24:55.287181 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 02:24:55.364902 kubelet[2504]: E0311 02:24:55.355293 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 11 02:24:55.364902 kubelet[2504]: I0311 02:24:55.356785 2504 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 11 02:24:55.389721 kubelet[2504]: I0311 02:24:55.385936 2504 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 02:24:55.389721 kubelet[2504]: I0311 02:24:55.386518 2504 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 02:24:55.389721 kubelet[2504]: I0311 02:24:55.386613 2504 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 02:24:55.389721 kubelet[2504]: I0311 02:24:55.387118 2504 topology_manager.go:138] "Creating topology manager with none policy"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.387132 2504 container_manager_linux.go:306] "Creating device plugin manager"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.387212 2504 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.388086 2504 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.388528 2504 kubelet.go:475] "Attempting to sync node with API server"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.389249 2504 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.390096 2504 kubelet.go:387] "Adding apiserver pod source"
Mar 11 02:24:55.394162 kubelet[2504]: I0311 02:24:55.390117 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 02:24:55.398730 kubelet[2504]: I0311 02:24:55.398203 2504 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 11 02:24:55.405828 kubelet[2504]: I0311 02:24:55.405645 2504 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 02:24:55.405828 kubelet[2504]: I0311 02:24:55.405719 2504 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 02:24:55.449127 kubelet[2504]: I0311 02:24:55.444863 2504 server.go:1262] "Started kubelet"
Mar 11 02:24:55.449127 kubelet[2504]: I0311 02:24:55.447917 2504 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 02:24:55.449127 kubelet[2504]: I0311 02:24:55.448059 2504 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 02:24:55.449127 kubelet[2504]: I0311 02:24:55.448411 2504 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 02:24:55.449127 kubelet[2504]: I0311 02:24:55.448494 2504 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 02:24:55.452726 kubelet[2504]: I0311 02:24:55.450821 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 11 02:24:55.460366 kubelet[2504]: E0311 02:24:55.460244 2504 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 02:24:55.460366 kubelet[2504]: I0311 02:24:55.452227 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 02:24:55.461154 kubelet[2504]: I0311 02:24:55.460910 2504 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 11 02:24:55.461154 kubelet[2504]: I0311 02:24:55.451230 2504 server.go:310] "Adding debug handlers to kubelet server"
Mar 11 02:24:55.463270 kubelet[2504]: I0311 02:24:55.463194 2504 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 02:24:55.467733 kubelet[2504]: I0311 02:24:55.464342 2504 factory.go:223] Registration of the systemd container factory successfully
Mar 11 02:24:55.467733 kubelet[2504]: I0311 02:24:55.464489 2504 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 02:24:55.467733 kubelet[2504]: I0311 02:24:55.465397 2504 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 02:24:55.470297 kubelet[2504]: I0311 02:24:55.470205 2504 factory.go:223] Registration of the containerd container factory successfully
Mar 11 02:24:55.538065 kubelet[2504]: I0311 02:24:55.537933 2504 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 11 02:24:55.547857 kubelet[2504]: I0311 02:24:55.547829 2504 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 11 02:24:55.548340 kubelet[2504]: I0311 02:24:55.548234 2504 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 11 02:24:55.548888 kubelet[2504]: I0311 02:24:55.548873 2504 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 11 02:24:55.550237 kubelet[2504]: E0311 02:24:55.550203 2504 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 11 02:24:55.650569 kubelet[2504]: E0311 02:24:55.650377 2504 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673133 2504 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673162 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673187 2504 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673367 2504 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673383 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673406 2504 policy_none.go:49] "None policy: Start"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673420 2504 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673434 2504 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673609 2504 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 11 02:24:55.673920 kubelet[2504]: I0311 02:24:55.673624 2504 policy_none.go:47] "Start"
Mar 11 02:24:55.688662 kubelet[2504]: E0311 02:24:55.688509 2504 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 11 02:24:55.689074 kubelet[2504]: I0311 02:24:55.688933 2504 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 11 02:24:55.689336 kubelet[2504]: I0311 02:24:55.689070 2504 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 11 02:24:55.690186 kubelet[2504]: I0311 02:24:55.690053 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 11 02:24:55.694948 kubelet[2504]: E0311 02:24:55.694923 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 11 02:24:55.853407 kubelet[2504]: I0311 02:24:55.853125 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:24:55.853407 kubelet[2504]: I0311 02:24:55.853257 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:55.855397 kubelet[2504]: I0311 02:24:55.855254 2504 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.869126 kubelet[2504]: I0311 02:24:55.868074 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/328f6c372c7979059974ab052964c8bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"328f6c372c7979059974ab052964c8bc\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:55.869126 kubelet[2504]: I0311 02:24:55.868260 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/328f6c372c7979059974ab052964c8bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"328f6c372c7979059974ab052964c8bc\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:55.869126 kubelet[2504]: I0311 02:24:55.868340 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/328f6c372c7979059974ab052964c8bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"328f6c372c7979059974ab052964c8bc\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:24:55.869126 kubelet[2504]: I0311 02:24:55.868370 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.869126 kubelet[2504]: I0311 02:24:55.868397 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.869395 kubelet[2504]: I0311 02:24:55.868419 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 11 02:24:55.869395 kubelet[2504]: I0311 02:24:55.868444 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.869395 kubelet[2504]: I0311 02:24:55.868466 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.869395 kubelet[2504]: I0311 02:24:55.868489 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.870463 kubelet[2504]: E0311 02:24:55.870238 2504 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:24:55.931183 kubelet[2504]: I0311 02:24:55.930913 2504 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:24:55.948728 kubelet[2504]: I0311 02:24:55.947756 2504 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 11 02:24:55.948728 kubelet[2504]: I0311 02:24:55.947859 2504 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 02:24:56.178361 kubelet[2504]: E0311 02:24:56.166376 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:56.178361 kubelet[2504]: E0311 02:24:56.168470 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:56.178361 kubelet[2504]: E0311 02:24:56.171928 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:56.393305 kubelet[2504]: I0311 02:24:56.393099 2504 apiserver.go:52] "Watching apiserver"
Mar 11 02:24:56.464837 kubelet[2504]: I0311 02:24:56.464342 2504 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 02:24:56.599083 kubelet[2504]: E0311 02:24:56.597666 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:56.601608 kubelet[2504]: E0311 02:24:56.598092 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:56.602165 kubelet[2504]: E0311 02:24:56.602146 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:56.929089 kubelet[2504]: I0311 02:24:56.928099 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.928072644 podStartE2EDuration="1.928072644s" podCreationTimestamp="2026-03-11 02:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:56.692429726 +0000 UTC m=+1.562038471" watchObservedRunningTime="2026-03-11 02:24:56.928072644 +0000 UTC m=+1.797681388"
Mar 11 02:24:56.929089 kubelet[2504]: I0311 02:24:56.928241 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9282311810000001 podStartE2EDuration="1.928231181s" podCreationTimestamp="2026-03-11 02:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:56.925312299 +0000 UTC m=+1.794921084" watchObservedRunningTime="2026-03-11 02:24:56.928231181 +0000 UTC m=+1.797839926"
Mar 11 02:24:57.606243 kubelet[2504]: E0311 02:24:57.606192 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:57.610748 kubelet[2504]: E0311 02:24:57.608646 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:58.790852 kubelet[2504]: E0311 02:24:58.790742 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:58.873053 kubelet[2504]: E0311 02:24:58.872913 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:59.627842 kubelet[2504]: E0311 02:24:59.627756 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:24:59.628147 kubelet[2504]: E0311 02:24:59.627756 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:00.157144 kubelet[2504]: I0311 02:25:00.157042 2504 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 11 02:25:00.157886 containerd[1449]: time="2026-03-11T02:25:00.157541310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 11 02:25:00.158436 kubelet[2504]: I0311 02:25:00.157910 2504 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 11 02:25:00.632198 kubelet[2504]: E0311 02:25:00.631526 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:01.345486 systemd[1]: Created slice kubepods-besteffort-pod832b04ff_3017_4b80_b753_5981b435b195.slice - libcontainer container kubepods-besteffort-pod832b04ff_3017_4b80_b753_5981b435b195.slice.
Mar 11 02:25:01.450465 systemd[1]: Created slice kubepods-besteffort-pod4e64e664_f6cd_4320_af4a_db528faafe49.slice - libcontainer container kubepods-besteffort-pod4e64e664_f6cd_4320_af4a_db528faafe49.slice.
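[Editor's note] The `Updating Pod CIDR` entries above show kubelet pushing `192.168.0.0/24` to the container runtime; that range bounds the pod IPs this node can assign. A minimal sketch with Python's standard `ipaddress` module illustrates what the CIDR implies (the sample addresses `192.168.0.42` and `10.0.0.84` are taken only for illustration; the latter is the API server address seen earlier in the log, which sits outside the pod range):

```python
# Sketch: interpreting the node's PodCIDR from the runtime config update.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")  # CIDR from the log entry

# A /24 covers 256 addresses (network and broadcast included).
print(pod_cidr.num_addresses)

# A pod IP allocated on this node must fall inside the range...
print(ipaddress.ip_address("192.168.0.42") in pod_cidr)
# ...while the API server endpoint (10.0.0.84) is on a different network.
print(ipaddress.ip_address("10.0.0.84") in pod_cidr)
```

The same membership check is what CNI plugins and kube-proxy rely on when deciding whether traffic is pod-local.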
Mar 11 02:25:01.481920 kubelet[2504]: I0311 02:25:01.481687 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlgf\" (UniqueName: \"kubernetes.io/projected/832b04ff-3017-4b80-b753-5981b435b195-kube-api-access-jvlgf\") pod \"kube-proxy-p444m\" (UID: \"832b04ff-3017-4b80-b753-5981b435b195\") " pod="kube-system/kube-proxy-p444m"
Mar 11 02:25:01.481920 kubelet[2504]: I0311 02:25:01.481868 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/832b04ff-3017-4b80-b753-5981b435b195-kube-proxy\") pod \"kube-proxy-p444m\" (UID: \"832b04ff-3017-4b80-b753-5981b435b195\") " pod="kube-system/kube-proxy-p444m"
Mar 11 02:25:01.482698 kubelet[2504]: I0311 02:25:01.482013 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/832b04ff-3017-4b80-b753-5981b435b195-xtables-lock\") pod \"kube-proxy-p444m\" (UID: \"832b04ff-3017-4b80-b753-5981b435b195\") " pod="kube-system/kube-proxy-p444m"
Mar 11 02:25:01.482698 kubelet[2504]: I0311 02:25:01.482032 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/832b04ff-3017-4b80-b753-5981b435b195-lib-modules\") pod \"kube-proxy-p444m\" (UID: \"832b04ff-3017-4b80-b753-5981b435b195\") " pod="kube-system/kube-proxy-p444m"
Mar 11 02:25:01.582344 kubelet[2504]: I0311 02:25:01.582240 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e64e664-f6cd-4320-af4a-db528faafe49-var-lib-calico\") pod \"tigera-operator-5588576f44-vzpd5\" (UID: \"4e64e664-f6cd-4320-af4a-db528faafe49\") " pod="tigera-operator/tigera-operator-5588576f44-vzpd5"
Mar 11 02:25:01.582344 kubelet[2504]: I0311 02:25:01.582332 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhw7k\" (UniqueName: \"kubernetes.io/projected/4e64e664-f6cd-4320-af4a-db528faafe49-kube-api-access-bhw7k\") pod \"tigera-operator-5588576f44-vzpd5\" (UID: \"4e64e664-f6cd-4320-af4a-db528faafe49\") " pod="tigera-operator/tigera-operator-5588576f44-vzpd5"
Mar 11 02:25:01.661268 kubelet[2504]: E0311 02:25:01.661113 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:01.662186 containerd[1449]: time="2026-03-11T02:25:01.662111436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p444m,Uid:832b04ff-3017-4b80-b753-5981b435b195,Namespace:kube-system,Attempt:0,}"
Mar 11 02:25:01.760161 containerd[1449]: time="2026-03-11T02:25:01.759643552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vzpd5,Uid:4e64e664-f6cd-4320-af4a-db528faafe49,Namespace:tigera-operator,Attempt:0,}"
Mar 11 02:25:01.767240 containerd[1449]: time="2026-03-11T02:25:01.766858190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:25:01.767240 containerd[1449]: time="2026-03-11T02:25:01.766932693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:25:01.767240 containerd[1449]: time="2026-03-11T02:25:01.767016378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:25:01.770215 containerd[1449]: time="2026-03-11T02:25:01.770083768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:25:01.852542 systemd[1]: Started cri-containerd-a4b56e441aa24501bda9a04fb6665641fdae9fa9e80dd035abef70d56257a815.scope - libcontainer container a4b56e441aa24501bda9a04fb6665641fdae9fa9e80dd035abef70d56257a815.
Mar 11 02:25:01.878658 containerd[1449]: time="2026-03-11T02:25:01.877782910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:25:01.878658 containerd[1449]: time="2026-03-11T02:25:01.877876570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:25:01.878658 containerd[1449]: time="2026-03-11T02:25:01.877898172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:25:01.878658 containerd[1449]: time="2026-03-11T02:25:01.878325820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:25:02.038378 containerd[1449]: time="2026-03-11T02:25:02.038280467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p444m,Uid:832b04ff-3017-4b80-b753-5981b435b195,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4b56e441aa24501bda9a04fb6665641fdae9fa9e80dd035abef70d56257a815\""
Mar 11 02:25:02.041099 kubelet[2504]: E0311 02:25:02.040252 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:02.056837 containerd[1449]: time="2026-03-11T02:25:02.056777107Z" level=info msg="CreateContainer within sandbox \"a4b56e441aa24501bda9a04fb6665641fdae9fa9e80dd035abef70d56257a815\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 11 02:25:02.058448 systemd[1]: Started cri-containerd-80251895ddd804734f5a0c2c78896b0af78d2ed61ff6cf74457f5922e490300c.scope - libcontainer container 80251895ddd804734f5a0c2c78896b0af78d2ed61ff6cf74457f5922e490300c.
Mar 11 02:25:02.077360 containerd[1449]: time="2026-03-11T02:25:02.077254435Z" level=info msg="CreateContainer within sandbox \"a4b56e441aa24501bda9a04fb6665641fdae9fa9e80dd035abef70d56257a815\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5315e7af270443d6cfaea0a6606d8bdb7923dffd82ca25e0b1cf60676db7ff87\""
Mar 11 02:25:02.078496 containerd[1449]: time="2026-03-11T02:25:02.078454095Z" level=info msg="StartContainer for \"5315e7af270443d6cfaea0a6606d8bdb7923dffd82ca25e0b1cf60676db7ff87\""
Mar 11 02:25:02.132814 containerd[1449]: time="2026-03-11T02:25:02.131050002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vzpd5,Uid:4e64e664-f6cd-4320-af4a-db528faafe49,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"80251895ddd804734f5a0c2c78896b0af78d2ed61ff6cf74457f5922e490300c\""
Mar 11 02:25:02.136160 systemd[1]: Started cri-containerd-5315e7af270443d6cfaea0a6606d8bdb7923dffd82ca25e0b1cf60676db7ff87.scope - libcontainer container 5315e7af270443d6cfaea0a6606d8bdb7923dffd82ca25e0b1cf60676db7ff87.
Mar 11 02:25:02.137421 containerd[1449]: time="2026-03-11T02:25:02.137349992Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 11 02:25:02.180260 containerd[1449]: time="2026-03-11T02:25:02.180110645Z" level=info msg="StartContainer for \"5315e7af270443d6cfaea0a6606d8bdb7923dffd82ca25e0b1cf60676db7ff87\" returns successfully"
Mar 11 02:25:02.638930 kubelet[2504]: E0311 02:25:02.637937 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:02.653876 kubelet[2504]: I0311 02:25:02.653770 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p444m" podStartSLOduration=1.653747971 podStartE2EDuration="1.653747971s" podCreationTimestamp="2026-03-11 02:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:25:02.65363387 +0000 UTC m=+7.523242625" watchObservedRunningTime="2026-03-11 02:25:02.653747971 +0000 UTC m=+7.523356736"
Mar 11 02:25:02.966599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144812382.mount: Deactivated successfully.
Mar 11 02:25:03.606817 kubelet[2504]: E0311 02:25:03.606687 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:03.642720 kubelet[2504]: E0311 02:25:03.642657 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:04.646641 kubelet[2504]: E0311 02:25:04.646606 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:05.130821 containerd[1449]: time="2026-03-11T02:25:05.129929213Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:25:05.131335 containerd[1449]: time="2026-03-11T02:25:05.131270202Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 11 02:25:05.132913 containerd[1449]: time="2026-03-11T02:25:05.132861103Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:25:05.135694 containerd[1449]: time="2026-03-11T02:25:05.135644337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:25:05.136809 containerd[1449]: time="2026-03-11T02:25:05.136714350Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.9992782s"
Mar 11 02:25:05.136871 containerd[1449]: time="2026-03-11T02:25:05.136841937Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 11 02:25:05.143689 containerd[1449]: time="2026-03-11T02:25:05.143609897Z" level=info msg="CreateContainer within sandbox \"80251895ddd804734f5a0c2c78896b0af78d2ed61ff6cf74457f5922e490300c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 11 02:25:05.161681 containerd[1449]: time="2026-03-11T02:25:05.161526193Z" level=info msg="CreateContainer within sandbox \"80251895ddd804734f5a0c2c78896b0af78d2ed61ff6cf74457f5922e490300c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"110f7e6c54f88fb042bf4ce5a878c9824ba064a77d0330b737cd158b1c0de7c4\""
Mar 11 02:25:05.162774 containerd[1449]: time="2026-03-11T02:25:05.162639120Z" level=info msg="StartContainer for \"110f7e6c54f88fb042bf4ce5a878c9824ba064a77d0330b737cd158b1c0de7c4\""
Mar 11 02:25:05.219264 systemd[1]: Started cri-containerd-110f7e6c54f88fb042bf4ce5a878c9824ba064a77d0330b737cd158b1c0de7c4.scope - libcontainer container 110f7e6c54f88fb042bf4ce5a878c9824ba064a77d0330b737cd158b1c0de7c4.
Mar 11 02:25:05.261822 containerd[1449]: time="2026-03-11T02:25:05.261763495Z" level=info msg="StartContainer for \"110f7e6c54f88fb042bf4ce5a878c9824ba064a77d0330b737cd158b1c0de7c4\" returns successfully"
Mar 11 02:25:08.393163 update_engine[1435]: I20260311 02:25:08.393070 1435 update_attempter.cc:509] Updating boot flags...
Mar 11 02:25:08.463156 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2896)
Mar 11 02:25:08.532045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2895)
Mar 11 02:25:11.174147 sudo[1630]: pam_unix(sudo:session): session closed for user root
Mar 11 02:25:11.177083 sshd[1627]: pam_unix(sshd:session): session closed for user core
Mar 11 02:25:11.183274 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:50734.service: Deactivated successfully.
Mar 11 02:25:11.185502 systemd[1]: session-7.scope: Deactivated successfully.
Mar 11 02:25:11.185751 systemd[1]: session-7.scope: Consumed 9.019s CPU time, 161.8M memory peak, 0B memory swap peak.
Mar 11 02:25:11.188134 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit.
Mar 11 02:25:11.193935 systemd-logind[1434]: Removed session 7.
Mar 11 02:25:13.453515 kubelet[2504]: I0311 02:25:13.453391 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-vzpd5" podStartSLOduration=9.45099255 podStartE2EDuration="12.453366432s" podCreationTimestamp="2026-03-11 02:25:01 +0000 UTC" firstStartedPulling="2026-03-11 02:25:02.135909442 +0000 UTC m=+7.005518187" lastFinishedPulling="2026-03-11 02:25:05.138283323 +0000 UTC m=+10.007892069" observedRunningTime="2026-03-11 02:25:05.660437263 +0000 UTC m=+10.530046018" watchObservedRunningTime="2026-03-11 02:25:13.453366432 +0000 UTC m=+18.322975196"
Mar 11 02:25:13.548514 systemd[1]: Created slice kubepods-besteffort-poddd17cf68_b218_4c13_9153_719611ce9b20.slice - libcontainer container kubepods-besteffort-poddd17cf68_b218_4c13_9153_719611ce9b20.slice.
Mar 11 02:25:13.560882 systemd[1]: Created slice kubepods-besteffort-podc3865e17_e839_42b1_b29c_abe79d35edb1.slice - libcontainer container kubepods-besteffort-podc3865e17_e839_42b1_b29c_abe79d35edb1.slice.
Mar 11 02:25:13.649683 kubelet[2504]: E0311 02:25:13.649529 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90"
Mar 11 02:25:13.724614 kubelet[2504]: I0311 02:25:13.723623 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-cni-log-dir\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724614 kubelet[2504]: I0311 02:25:13.723658 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-var-run-calico\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724614 kubelet[2504]: I0311 02:25:13.723674 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-lib-modules\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724614 kubelet[2504]: I0311 02:25:13.723689 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-policysync\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724614 kubelet[2504]: I0311 02:25:13.723701 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-var-lib-calico\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724833 kubelet[2504]: I0311 02:25:13.723713 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-sys-fs\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724833 kubelet[2504]: I0311 02:25:13.723728 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-xtables-lock\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724833 kubelet[2504]: I0311 02:25:13.723741 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-cni-net-dir\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724833 kubelet[2504]: I0311 02:25:13.723756 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlnb\" (UniqueName: \"kubernetes.io/projected/dd17cf68-b218-4c13-9153-719611ce9b20-kube-api-access-zqlnb\") pod \"calico-typha-b96c8dfc8-tfrlb\" (UID: \"dd17cf68-b218-4c13-9153-719611ce9b20\") " pod="calico-system/calico-typha-b96c8dfc8-tfrlb"
Mar 11 02:25:13.724833 kubelet[2504]: I0311 02:25:13.723782 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c3865e17-e839-42b1-b29c-abe79d35edb1-node-certs\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724942 kubelet[2504]: I0311 02:25:13.723803 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3865e17-e839-42b1-b29c-abe79d35edb1-tigera-ca-bundle\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724942 kubelet[2504]: I0311 02:25:13.723823 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd17cf68-b218-4c13-9153-719611ce9b20-typha-certs\") pod \"calico-typha-b96c8dfc8-tfrlb\" (UID: \"dd17cf68-b218-4c13-9153-719611ce9b20\") " pod="calico-system/calico-typha-b96c8dfc8-tfrlb"
Mar 11 02:25:13.724942 kubelet[2504]: I0311 02:25:13.723839 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-flexvol-driver-host\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724942 kubelet[2504]: I0311 02:25:13.723854 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdvp\" (UniqueName: \"kubernetes.io/projected/c3865e17-e839-42b1-b29c-abe79d35edb1-kube-api-access-njdvp\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.724942 kubelet[2504]: I0311 02:25:13.723867 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd17cf68-b218-4c13-9153-719611ce9b20-tigera-ca-bundle\") pod \"calico-typha-b96c8dfc8-tfrlb\" (UID: \"dd17cf68-b218-4c13-9153-719611ce9b20\") " pod="calico-system/calico-typha-b96c8dfc8-tfrlb"
Mar 11 02:25:13.725141 kubelet[2504]: I0311 02:25:13.723879 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-cni-bin-dir\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.725141 kubelet[2504]: I0311 02:25:13.723892 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-nodeproc\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.725141 kubelet[2504]: I0311 02:25:13.723904 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c3865e17-e839-42b1-b29c-abe79d35edb1-bpffs\") pod \"calico-node-clhc6\" (UID: \"c3865e17-e839-42b1-b29c-abe79d35edb1\") " pod="calico-system/calico-node-clhc6"
Mar 11 02:25:13.825631 kubelet[2504]: I0311 02:25:13.825520 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adc87660-fa32-4458-aba8-d62f16053a90-kubelet-dir\") pod \"csi-node-driver-wtltt\" (UID: \"adc87660-fa32-4458-aba8-d62f16053a90\") " pod="calico-system/csi-node-driver-wtltt"
Mar 11 02:25:13.825631 kubelet[2504]: I0311 02:25:13.825589 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/adc87660-fa32-4458-aba8-d62f16053a90-varrun\") pod \"csi-node-driver-wtltt\" (UID: \"adc87660-fa32-4458-aba8-d62f16053a90\") " pod="calico-system/csi-node-driver-wtltt"
Mar 11 02:25:13.825631 kubelet[2504]: I0311 02:25:13.825614 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplgg\" (UniqueName: \"kubernetes.io/projected/adc87660-fa32-4458-aba8-d62f16053a90-kube-api-access-gplgg\") pod \"csi-node-driver-wtltt\" (UID: \"adc87660-fa32-4458-aba8-d62f16053a90\") " pod="calico-system/csi-node-driver-wtltt"
Mar 11 02:25:13.825631 kubelet[2504]: I0311 02:25:13.825638 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/adc87660-fa32-4458-aba8-d62f16053a90-registration-dir\") pod \"csi-node-driver-wtltt\" (UID: \"adc87660-fa32-4458-aba8-d62f16053a90\") " pod="calico-system/csi-node-driver-wtltt"
Mar 11 02:25:13.826096 kubelet[2504]: I0311 02:25:13.825909 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/adc87660-fa32-4458-aba8-d62f16053a90-socket-dir\") pod \"csi-node-driver-wtltt\" (UID: \"adc87660-fa32-4458-aba8-d62f16053a90\") " pod="calico-system/csi-node-driver-wtltt"
Mar 11 02:25:13.831607 kubelet[2504]: E0311 02:25:13.830302 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.831607 kubelet[2504]: W0311 02:25:13.830331 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.831607 kubelet[2504]: E0311 02:25:13.830471 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.832893 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.837299 kubelet[2504]: W0311 02:25:13.832915 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.832935 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.833932 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.837299 kubelet[2504]: W0311 02:25:13.833945 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.834082 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.835038 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.837299 kubelet[2504]: W0311 02:25:13.835051 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.835174 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.837299 kubelet[2504]: E0311 02:25:13.836084 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.837685 kubelet[2504]: W0311 02:25:13.836096 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.837685 kubelet[2504]: E0311 02:25:13.836217 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.845118 kubelet[2504]: E0311 02:25:13.839153 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.845118 kubelet[2504]: W0311 02:25:13.839176 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.845118 kubelet[2504]: E0311 02:25:13.839191 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.845118 kubelet[2504]: E0311 02:25:13.840570 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.845118 kubelet[2504]: W0311 02:25:13.840591 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.845118 kubelet[2504]: E0311 02:25:13.840612 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.845118 kubelet[2504]: E0311 02:25:13.843354 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.845118 kubelet[2504]: W0311 02:25:13.843370 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.845118 kubelet[2504]: E0311 02:25:13.843391 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.855509 kubelet[2504]: E0311 02:25:13.855469 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.856480 kubelet[2504]: W0311 02:25:13.855899 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.856480 kubelet[2504]: E0311 02:25:13.855940 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.860218 kubelet[2504]: E0311 02:25:13.860069 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.860218 kubelet[2504]: W0311 02:25:13.860141 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.860218 kubelet[2504]: E0311 02:25:13.860169 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.861042 kubelet[2504]: E0311 02:25:13.860793 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.861042 kubelet[2504]: W0311 02:25:13.860812 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.861042 kubelet[2504]: E0311 02:25:13.860829 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.867894 kubelet[2504]: E0311 02:25:13.866490 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.867894 kubelet[2504]: W0311 02:25:13.866509 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.867894 kubelet[2504]: E0311 02:25:13.866527 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.881145 containerd[1449]: time="2026-03-11T02:25:13.880529736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-clhc6,Uid:c3865e17-e839-42b1-b29c-abe79d35edb1,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:13.929069 kubelet[2504]: E0311 02:25:13.928819 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.929069 kubelet[2504]: W0311 02:25:13.928853 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.929069 kubelet[2504]: E0311 02:25:13.928881 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.931028 kubelet[2504]: E0311 02:25:13.929879 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.931028 kubelet[2504]: W0311 02:25:13.929902 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.931028 kubelet[2504]: E0311 02:25:13.929922 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.931028 kubelet[2504]: E0311 02:25:13.930799 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.931028 kubelet[2504]: W0311 02:25:13.930815 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.931028 kubelet[2504]: E0311 02:25:13.930924 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.932591 kubelet[2504]: E0311 02:25:13.932486 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.932591 kubelet[2504]: W0311 02:25:13.932541 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.932591 kubelet[2504]: E0311 02:25:13.932560 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.933887 kubelet[2504]: E0311 02:25:13.933817 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.933887 kubelet[2504]: W0311 02:25:13.933868 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.933887 kubelet[2504]: E0311 02:25:13.933883 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.935473 kubelet[2504]: E0311 02:25:13.934609 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.935473 kubelet[2504]: W0311 02:25:13.934664 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.935473 kubelet[2504]: E0311 02:25:13.934679 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.937463 kubelet[2504]: E0311 02:25:13.937342 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.937463 kubelet[2504]: W0311 02:25:13.937359 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.937463 kubelet[2504]: E0311 02:25:13.937373 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.940182 kubelet[2504]: E0311 02:25:13.940111 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.940182 kubelet[2504]: W0311 02:25:13.940165 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.940182 kubelet[2504]: E0311 02:25:13.940182 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.942115 kubelet[2504]: E0311 02:25:13.941937 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.942115 kubelet[2504]: W0311 02:25:13.942084 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.942115 kubelet[2504]: E0311 02:25:13.942102 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.943489 kubelet[2504]: E0311 02:25:13.943241 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.943489 kubelet[2504]: W0311 02:25:13.943316 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.943489 kubelet[2504]: E0311 02:25:13.943332 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:25:13.946077 kubelet[2504]: E0311 02:25:13.945803 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:25:13.946077 kubelet[2504]: W0311 02:25:13.945820 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:25:13.946077 kubelet[2504]: E0311 02:25:13.945833 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.947130 kubelet[2504]: E0311 02:25:13.946883 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.947130 kubelet[2504]: W0311 02:25:13.947110 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.947130 kubelet[2504]: E0311 02:25:13.947133 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.947767 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.950036 kubelet[2504]: W0311 02:25:13.947818 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.947835 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.948411 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.950036 kubelet[2504]: W0311 02:25:13.948424 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.948437 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.948866 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.950036 kubelet[2504]: W0311 02:25:13.948878 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.948891 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.950036 kubelet[2504]: E0311 02:25:13.949453 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.950710 kubelet[2504]: W0311 02:25:13.949465 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.950710 kubelet[2504]: E0311 02:25:13.949482 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.950710 kubelet[2504]: E0311 02:25:13.950054 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.950710 kubelet[2504]: W0311 02:25:13.950066 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.950710 kubelet[2504]: E0311 02:25:13.950082 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.950710 kubelet[2504]: E0311 02:25:13.950710 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.950923 kubelet[2504]: W0311 02:25:13.950721 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.950923 kubelet[2504]: E0311 02:25:13.950734 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.951365 kubelet[2504]: E0311 02:25:13.951207 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.951365 kubelet[2504]: W0311 02:25:13.951308 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.951365 kubelet[2504]: E0311 02:25:13.951344 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.952313 containerd[1449]: time="2026-03-11T02:25:13.951515573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:13.952313 containerd[1449]: time="2026-03-11T02:25:13.951614941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:13.952313 containerd[1449]: time="2026-03-11T02:25:13.951636779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:13.952421 kubelet[2504]: E0311 02:25:13.951741 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.952421 kubelet[2504]: W0311 02:25:13.951753 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.952421 kubelet[2504]: E0311 02:25:13.951766 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.952544 containerd[1449]: time="2026-03-11T02:25:13.951796538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:13.952588 kubelet[2504]: E0311 02:25:13.952478 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.952588 kubelet[2504]: W0311 02:25:13.952491 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.952588 kubelet[2504]: E0311 02:25:13.952505 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.954323 kubelet[2504]: E0311 02:25:13.953744 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.954323 kubelet[2504]: W0311 02:25:13.953762 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.954323 kubelet[2504]: E0311 02:25:13.953775 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.955076 kubelet[2504]: E0311 02:25:13.954788 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.955076 kubelet[2504]: W0311 02:25:13.954802 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.955076 kubelet[2504]: E0311 02:25:13.954816 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.955938 kubelet[2504]: E0311 02:25:13.955861 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.955938 kubelet[2504]: W0311 02:25:13.955919 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.955938 kubelet[2504]: E0311 02:25:13.955934 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.959184 kubelet[2504]: E0311 02:25:13.958681 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.959184 kubelet[2504]: W0311 02:25:13.958698 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.959184 kubelet[2504]: E0311 02:25:13.958713 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:25:13.984312 kubelet[2504]: E0311 02:25:13.984090 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:25:13.984312 kubelet[2504]: W0311 02:25:13.984116 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:25:13.984312 kubelet[2504]: E0311 02:25:13.984141 2504 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:25:13.999257 systemd[1]: Started cri-containerd-ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e.scope - libcontainer container ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e. Mar 11 02:25:14.045917 containerd[1449]: time="2026-03-11T02:25:14.045761568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-clhc6,Uid:c3865e17-e839-42b1-b29c-abe79d35edb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\"" Mar 11 02:25:14.056276 containerd[1449]: time="2026-03-11T02:25:14.056067045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 11 02:25:14.160693 kubelet[2504]: E0311 02:25:14.160505 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:14.161492 containerd[1449]: time="2026-03-11T02:25:14.161414676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b96c8dfc8-tfrlb,Uid:dd17cf68-b218-4c13-9153-719611ce9b20,Namespace:calico-system,Attempt:0,}" Mar 11 02:25:14.210237 containerd[1449]: time="2026-03-11T02:25:14.209480093Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:14.210237 containerd[1449]: time="2026-03-11T02:25:14.209570851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:14.210237 containerd[1449]: time="2026-03-11T02:25:14.209700080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:14.210237 containerd[1449]: time="2026-03-11T02:25:14.210107827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:14.252351 systemd[1]: Started cri-containerd-10a347ad3d2a59c6a1af00b88ec9e59d37cbf0d8168105d7bfc5f14fcdd2aa6a.scope - libcontainer container 10a347ad3d2a59c6a1af00b88ec9e59d37cbf0d8168105d7bfc5f14fcdd2aa6a. Mar 11 02:25:14.328326 containerd[1449]: time="2026-03-11T02:25:14.327939982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b96c8dfc8-tfrlb,Uid:dd17cf68-b218-4c13-9153-719611ce9b20,Namespace:calico-system,Attempt:0,} returns sandbox id \"10a347ad3d2a59c6a1af00b88ec9e59d37cbf0d8168105d7bfc5f14fcdd2aa6a\"" Mar 11 02:25:14.335372 kubelet[2504]: E0311 02:25:14.335173 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:15.353500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724525601.mount: Deactivated successfully. 
Mar 11 02:25:15.456666 containerd[1449]: time="2026-03-11T02:25:15.456539442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:15.458060 containerd[1449]: time="2026-03-11T02:25:15.457887108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 11 02:25:15.459462 containerd[1449]: time="2026-03-11T02:25:15.459373799Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:15.462108 containerd[1449]: time="2026-03-11T02:25:15.462060540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:15.463070 containerd[1449]: time="2026-03-11T02:25:15.462908434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.406778864s" Mar 11 02:25:15.463070 containerd[1449]: time="2026-03-11T02:25:15.463042503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 11 02:25:15.464782 containerd[1449]: time="2026-03-11T02:25:15.464758659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 11 02:25:15.469911 containerd[1449]: time="2026-03-11T02:25:15.469853308Z" level=info msg="CreateContainer within 
sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 02:25:15.491347 containerd[1449]: time="2026-03-11T02:25:15.491192963Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d\"" Mar 11 02:25:15.492144 containerd[1449]: time="2026-03-11T02:25:15.492082213Z" level=info msg="StartContainer for \"e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d\"" Mar 11 02:25:15.551244 kubelet[2504]: E0311 02:25:15.551069 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:15.558066 systemd[1]: Started cri-containerd-e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d.scope - libcontainer container e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d. Mar 11 02:25:15.802619 containerd[1449]: time="2026-03-11T02:25:15.802436960Z" level=info msg="StartContainer for \"e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d\" returns successfully" Mar 11 02:25:16.114602 systemd[1]: cri-containerd-e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d.scope: Deactivated successfully. Mar 11 02:25:16.373590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d-rootfs.mount: Deactivated successfully. 
Mar 11 02:25:16.427532 containerd[1449]: time="2026-03-11T02:25:16.419611942Z" level=info msg="shim disconnected" id=e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d namespace=k8s.io Mar 11 02:25:16.428854 containerd[1449]: time="2026-03-11T02:25:16.427751470Z" level=warning msg="cleaning up after shim disconnected" id=e98e5c454b0029cee6aefb824107304b7c9dd97f48c3425428c90a8a18752d6d namespace=k8s.io Mar 11 02:25:16.428854 containerd[1449]: time="2026-03-11T02:25:16.427903847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:25:17.554122 kubelet[2504]: E0311 02:25:17.553898 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:19.552351 kubelet[2504]: E0311 02:25:19.552252 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:19.637163 containerd[1449]: time="2026-03-11T02:25:19.636938991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:19.638351 containerd[1449]: time="2026-03-11T02:25:19.638277474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 11 02:25:19.640050 containerd[1449]: time="2026-03-11T02:25:19.639923050Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 
02:25:19.643163 containerd[1449]: time="2026-03-11T02:25:19.643078171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:19.645076 containerd[1449]: time="2026-03-11T02:25:19.644821159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.179689647s" Mar 11 02:25:19.645076 containerd[1449]: time="2026-03-11T02:25:19.644939158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 11 02:25:19.648658 containerd[1449]: time="2026-03-11T02:25:19.646402826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 11 02:25:19.671490 containerd[1449]: time="2026-03-11T02:25:19.670601725Z" level=info msg="CreateContainer within sandbox \"10a347ad3d2a59c6a1af00b88ec9e59d37cbf0d8168105d7bfc5f14fcdd2aa6a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 11 02:25:19.799046 containerd[1449]: time="2026-03-11T02:25:19.798777987Z" level=info msg="CreateContainer within sandbox \"10a347ad3d2a59c6a1af00b88ec9e59d37cbf0d8168105d7bfc5f14fcdd2aa6a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e82c13118dc05885859133628f8c530dbd4de5b18a143ceb59ec32fbf59c7834\"" Mar 11 02:25:19.800359 containerd[1449]: time="2026-03-11T02:25:19.800280827Z" level=info msg="StartContainer for \"e82c13118dc05885859133628f8c530dbd4de5b18a143ceb59ec32fbf59c7834\"" Mar 11 02:25:19.892568 systemd[1]: Started 
cri-containerd-e82c13118dc05885859133628f8c530dbd4de5b18a143ceb59ec32fbf59c7834.scope - libcontainer container e82c13118dc05885859133628f8c530dbd4de5b18a143ceb59ec32fbf59c7834. Mar 11 02:25:19.997409 containerd[1449]: time="2026-03-11T02:25:19.995755062Z" level=info msg="StartContainer for \"e82c13118dc05885859133628f8c530dbd4de5b18a143ceb59ec32fbf59c7834\" returns successfully" Mar 11 02:25:20.736327 kubelet[2504]: E0311 02:25:20.736205 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:20.751330 kubelet[2504]: I0311 02:25:20.750456 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b96c8dfc8-tfrlb" podStartSLOduration=2.44050835 podStartE2EDuration="7.750432323s" podCreationTimestamp="2026-03-11 02:25:13 +0000 UTC" firstStartedPulling="2026-03-11 02:25:14.336280697 +0000 UTC m=+19.205889442" lastFinishedPulling="2026-03-11 02:25:19.64620466 +0000 UTC m=+24.515813415" observedRunningTime="2026-03-11 02:25:20.75021721 +0000 UTC m=+25.619825975" watchObservedRunningTime="2026-03-11 02:25:20.750432323 +0000 UTC m=+25.620041067" Mar 11 02:25:21.551770 kubelet[2504]: E0311 02:25:21.551562 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:21.751239 kubelet[2504]: I0311 02:25:21.751049 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:25:21.752075 kubelet[2504]: E0311 02:25:21.751854 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 
02:25:23.551249 kubelet[2504]: E0311 02:25:23.551133 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:25.487501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524042968.mount: Deactivated successfully. Mar 11 02:25:25.552460 containerd[1449]: time="2026-03-11T02:25:25.552280867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:25.553627 containerd[1449]: time="2026-03-11T02:25:25.553532267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 11 02:25:25.555033 containerd[1449]: time="2026-03-11T02:25:25.554834642Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:25.555506 kubelet[2504]: E0311 02:25:25.555346 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:25.558486 containerd[1449]: time="2026-03-11T02:25:25.558284029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:25.559351 containerd[1449]: time="2026-03-11T02:25:25.559253852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 5.912761061s" Mar 11 02:25:25.559351 containerd[1449]: time="2026-03-11T02:25:25.559336984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 11 02:25:25.566335 containerd[1449]: time="2026-03-11T02:25:25.566298885Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 02:25:25.587043 containerd[1449]: time="2026-03-11T02:25:25.586836453Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac\"" Mar 11 02:25:25.588277 containerd[1449]: time="2026-03-11T02:25:25.587724860Z" level=info msg="StartContainer for \"3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac\"" Mar 11 02:25:25.669403 systemd[1]: Started cri-containerd-3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac.scope - libcontainer container 3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac. Mar 11 02:25:25.734636 containerd[1449]: time="2026-03-11T02:25:25.734490371Z" level=info msg="StartContainer for \"3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac\" returns successfully" Mar 11 02:25:25.811086 systemd[1]: cri-containerd-3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac.scope: Deactivated successfully. 
Mar 11 02:25:25.861088 containerd[1449]: time="2026-03-11T02:25:25.860894410Z" level=info msg="shim disconnected" id=3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac namespace=k8s.io
Mar 11 02:25:25.861088 containerd[1449]: time="2026-03-11T02:25:25.861034650Z" level=warning msg="cleaning up after shim disconnected" id=3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac namespace=k8s.io
Mar 11 02:25:25.861088 containerd[1449]: time="2026-03-11T02:25:25.861046123Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 11 02:25:26.488260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3918b116271aebcb02cdabedcac8b3148d622eeaa8e75016ab489e61bb9a82ac-rootfs.mount: Deactivated successfully.
Mar 11 02:25:26.803244 containerd[1449]: time="2026-03-11T02:25:26.803073063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 11 02:25:27.560467 kubelet[2504]: E0311 02:25:27.560298 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90"
Mar 11 02:25:27.802410 kubelet[2504]: I0311 02:25:27.802348 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:25:27.803895 kubelet[2504]: E0311 02:25:27.802856 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:28.806091 kubelet[2504]: E0311 02:25:28.805916 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:29.550671 kubelet[2504]: E0311 02:25:29.550356 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90"
Mar 11 02:25:29.918221 containerd[1449]: time="2026-03-11T02:25:29.917915983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:25:29.919463 containerd[1449]: time="2026-03-11T02:25:29.919395186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 11 02:25:29.920759 containerd[1449]: time="2026-03-11T02:25:29.920692982Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:25:29.928138 containerd[1449]: time="2026-03-11T02:25:29.927896925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:25:29.928925 containerd[1449]: time="2026-03-11T02:25:29.928841815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.125692295s"
Mar 11 02:25:29.928925 containerd[1449]: time="2026-03-11T02:25:29.928893251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 11 02:25:29.936595 containerd[1449]: time="2026-03-11T02:25:29.936567841Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 11 02:25:29.954484 containerd[1449]: time="2026-03-11T02:25:29.954380047Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d\""
Mar 11 02:25:29.956851 containerd[1449]: time="2026-03-11T02:25:29.955401301Z" level=info msg="StartContainer for \"64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d\""
Mar 11 02:25:30.033326 systemd[1]: Started cri-containerd-64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d.scope - libcontainer container 64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d.
Mar 11 02:25:30.074538 containerd[1449]: time="2026-03-11T02:25:30.074367146Z" level=info msg="StartContainer for \"64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d\" returns successfully"
Mar 11 02:25:30.894647 systemd[1]: cri-containerd-64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d.scope: Deactivated successfully.
Mar 11 02:25:30.926190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d-rootfs.mount: Deactivated successfully.
Mar 11 02:25:30.945324 kubelet[2504]: I0311 02:25:30.945171 2504 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 11 02:25:31.087815 containerd[1449]: time="2026-03-11T02:25:31.086523298Z" level=info msg="shim disconnected" id=64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d namespace=k8s.io
Mar 11 02:25:31.087815 containerd[1449]: time="2026-03-11T02:25:31.086592027Z" level=warning msg="cleaning up after shim disconnected" id=64b058397d247e11541e2eb49e8f30d6546dde77c7061a5539f7730ef425463d namespace=k8s.io
Mar 11 02:25:31.087815 containerd[1449]: time="2026-03-11T02:25:31.086605484Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 11 02:25:31.098629 kubelet[2504]: I0311 02:25:31.098519 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f6j7\" (UniqueName: \"kubernetes.io/projected/4d15dd84-e281-4325-b44c-bfd2cf49adb4-kube-api-access-8f6j7\") pod \"calico-apiserver-c54d5dff8-grck2\" (UID: \"4d15dd84-e281-4325-b44c-bfd2cf49adb4\") " pod="calico-system/calico-apiserver-c54d5dff8-grck2"
Mar 11 02:25:31.098629 kubelet[2504]: I0311 02:25:31.098599 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98451933-bb15-4d85-b793-b6047852572d-calico-apiserver-certs\") pod \"calico-apiserver-c54d5dff8-bhbz4\" (UID: \"98451933-bb15-4d85-b793-b6047852572d\") " pod="calico-system/calico-apiserver-c54d5dff8-bhbz4"
Mar 11 02:25:31.099790 kubelet[2504]: I0311 02:25:31.098639 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8mb\" (UniqueName: \"kubernetes.io/projected/98451933-bb15-4d85-b793-b6047852572d-kube-api-access-tz8mb\") pod \"calico-apiserver-c54d5dff8-bhbz4\" (UID: \"98451933-bb15-4d85-b793-b6047852572d\") " pod="calico-system/calico-apiserver-c54d5dff8-bhbz4"
Mar 11 02:25:31.099790 kubelet[2504]: I0311 02:25:31.098670 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d15dd84-e281-4325-b44c-bfd2cf49adb4-calico-apiserver-certs\") pod \"calico-apiserver-c54d5dff8-grck2\" (UID: \"4d15dd84-e281-4325-b44c-bfd2cf49adb4\") " pod="calico-system/calico-apiserver-c54d5dff8-grck2"
Mar 11 02:25:31.106931 systemd[1]: Created slice kubepods-besteffort-pod4d15dd84_e281_4325_b44c_bfd2cf49adb4.slice - libcontainer container kubepods-besteffort-pod4d15dd84_e281_4325_b44c_bfd2cf49adb4.slice.
Mar 11 02:25:31.127836 systemd[1]: Created slice kubepods-besteffort-pod98451933_bb15_4d85_b793_b6047852572d.slice - libcontainer container kubepods-besteffort-pod98451933_bb15_4d85_b793_b6047852572d.slice.
Mar 11 02:25:31.145574 systemd[1]: Created slice kubepods-besteffort-pod1a540df5_4f10_4a1a_9034_8b8ac7db4bef.slice - libcontainer container kubepods-besteffort-pod1a540df5_4f10_4a1a_9034_8b8ac7db4bef.slice.
Mar 11 02:25:31.158535 systemd[1]: Created slice kubepods-besteffort-pod3e4ab64e_7f0b_42f6_95eb_a75c21c49b91.slice - libcontainer container kubepods-besteffort-pod3e4ab64e_7f0b_42f6_95eb_a75c21c49b91.slice.
Mar 11 02:25:31.167785 systemd[1]: Created slice kubepods-burstable-podfbcb37ad_a949_4830_a43a_9cfdd14b9b96.slice - libcontainer container kubepods-burstable-podfbcb37ad_a949_4830_a43a_9cfdd14b9b96.slice.
Mar 11 02:25:31.177635 systemd[1]: Created slice kubepods-besteffort-pod22c779a8_71f3_4720_a430_6dad918fffbd.slice - libcontainer container kubepods-besteffort-pod22c779a8_71f3_4720_a430_6dad918fffbd.slice.
Mar 11 02:25:31.183560 systemd[1]: Created slice kubepods-burstable-pod1d28eda4_868c_4032_bb0a_0cda62dbcd9a.slice - libcontainer container kubepods-burstable-pod1d28eda4_868c_4032_bb0a_0cda62dbcd9a.slice.
Mar 11 02:25:31.199065 kubelet[2504]: I0311 02:25:31.198849 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22c779a8-71f3-4720-a430-6dad918fffbd-tigera-ca-bundle\") pod \"calico-kube-controllers-685f947667-xs4f9\" (UID: \"22c779a8-71f3-4720-a430-6dad918fffbd\") " pod="calico-system/calico-kube-controllers-685f947667-xs4f9"
Mar 11 02:25:31.199318 kubelet[2504]: I0311 02:25:31.199166 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjdkj\" (UniqueName: \"kubernetes.io/projected/fbcb37ad-a949-4830-a43a-9cfdd14b9b96-kube-api-access-mjdkj\") pod \"coredns-66bc5c9577-bcsbm\" (UID: \"fbcb37ad-a949-4830-a43a-9cfdd14b9b96\") " pod="kube-system/coredns-66bc5c9577-bcsbm"
Mar 11 02:25:31.199318 kubelet[2504]: I0311 02:25:31.199202 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6mc\" (UniqueName: \"kubernetes.io/projected/22c779a8-71f3-4720-a430-6dad918fffbd-kube-api-access-xt6mc\") pod \"calico-kube-controllers-685f947667-xs4f9\" (UID: \"22c779a8-71f3-4720-a430-6dad918fffbd\") " pod="calico-system/calico-kube-controllers-685f947667-xs4f9"
Mar 11 02:25:31.199318 kubelet[2504]: I0311 02:25:31.199218 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-nginx-config\") pod \"whisker-5fd96b98bb-ttsjm\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") " pod="calico-system/whisker-5fd96b98bb-ttsjm"
Mar 11 02:25:31.199318 kubelet[2504]: I0311 02:25:31.199232 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d28eda4-868c-4032-bb0a-0cda62dbcd9a-config-volume\") pod \"coredns-66bc5c9577-pgwpq\" (UID: \"1d28eda4-868c-4032-bb0a-0cda62dbcd9a\") " pod="kube-system/coredns-66bc5c9577-pgwpq"
Mar 11 02:25:31.199677 kubelet[2504]: I0311 02:25:31.199563 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktrn\" (UniqueName: \"kubernetes.io/projected/3e4ab64e-7f0b-42f6-95eb-a75c21c49b91-kube-api-access-xktrn\") pod \"goldmane-cccfbd5cf-67dgv\" (UID: \"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91\") " pod="calico-system/goldmane-cccfbd5cf-67dgv"
Mar 11 02:25:31.199677 kubelet[2504]: I0311 02:25:31.199644 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-backend-key-pair\") pod \"whisker-5fd96b98bb-ttsjm\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") " pod="calico-system/whisker-5fd96b98bb-ttsjm"
Mar 11 02:25:31.199677 kubelet[2504]: I0311 02:25:31.199673 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sth8t\" (UniqueName: \"kubernetes.io/projected/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-kube-api-access-sth8t\") pod \"whisker-5fd96b98bb-ttsjm\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") " pod="calico-system/whisker-5fd96b98bb-ttsjm"
Mar 11 02:25:31.199930 kubelet[2504]: I0311 02:25:31.199809 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3e4ab64e-7f0b-42f6-95eb-a75c21c49b91-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-67dgv\" (UID: \"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91\") " pod="calico-system/goldmane-cccfbd5cf-67dgv"
Mar 11 02:25:31.199930 kubelet[2504]: I0311 02:25:31.199893 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj79c\" (UniqueName: \"kubernetes.io/projected/1d28eda4-868c-4032-bb0a-0cda62dbcd9a-kube-api-access-gj79c\") pod \"coredns-66bc5c9577-pgwpq\" (UID: \"1d28eda4-868c-4032-bb0a-0cda62dbcd9a\") " pod="kube-system/coredns-66bc5c9577-pgwpq"
Mar 11 02:25:31.199930 kubelet[2504]: I0311 02:25:31.199918 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e4ab64e-7f0b-42f6-95eb-a75c21c49b91-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-67dgv\" (UID: \"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91\") " pod="calico-system/goldmane-cccfbd5cf-67dgv"
Mar 11 02:25:31.200737 kubelet[2504]: I0311 02:25:31.200661 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-ca-bundle\") pod \"whisker-5fd96b98bb-ttsjm\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") " pod="calico-system/whisker-5fd96b98bb-ttsjm"
Mar 11 02:25:31.200806 kubelet[2504]: I0311 02:25:31.200742 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ab64e-7f0b-42f6-95eb-a75c21c49b91-config\") pod \"goldmane-cccfbd5cf-67dgv\" (UID: \"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91\") " pod="calico-system/goldmane-cccfbd5cf-67dgv"
Mar 11 02:25:31.200806 kubelet[2504]: I0311 02:25:31.200791 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbcb37ad-a949-4830-a43a-9cfdd14b9b96-config-volume\") pod \"coredns-66bc5c9577-bcsbm\" (UID: \"fbcb37ad-a949-4830-a43a-9cfdd14b9b96\") " pod="kube-system/coredns-66bc5c9577-bcsbm"
Mar 11 02:25:31.425918 containerd[1449]: time="2026-03-11T02:25:31.424658226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-grck2,Uid:4d15dd84-e281-4325-b44c-bfd2cf49adb4,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:31.444669 containerd[1449]: time="2026-03-11T02:25:31.444619969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-bhbz4,Uid:98451933-bb15-4d85-b793-b6047852572d,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:31.459191 containerd[1449]: time="2026-03-11T02:25:31.459043531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd96b98bb-ttsjm,Uid:1a540df5-4f10-4a1a-9034-8b8ac7db4bef,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:31.469773 containerd[1449]: time="2026-03-11T02:25:31.468846793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-67dgv,Uid:3e4ab64e-7f0b-42f6-95eb-a75c21c49b91,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:31.479645 kubelet[2504]: E0311 02:25:31.479515 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:31.484439 containerd[1449]: time="2026-03-11T02:25:31.484228220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bcsbm,Uid:fbcb37ad-a949-4830-a43a-9cfdd14b9b96,Namespace:kube-system,Attempt:0,}"
Mar 11 02:25:31.493744 containerd[1449]: time="2026-03-11T02:25:31.493419246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-685f947667-xs4f9,Uid:22c779a8-71f3-4720-a430-6dad918fffbd,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:31.495018 kubelet[2504]: E0311 02:25:31.494832 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:25:31.497873 containerd[1449]: time="2026-03-11T02:25:31.497669380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pgwpq,Uid:1d28eda4-868c-4032-bb0a-0cda62dbcd9a,Namespace:kube-system,Attempt:0,}"
Mar 11 02:25:31.565672 systemd[1]: Created slice kubepods-besteffort-podadc87660_fa32_4458_aba8_d62f16053a90.slice - libcontainer container kubepods-besteffort-podadc87660_fa32_4458_aba8_d62f16053a90.slice.
Mar 11 02:25:31.575102 containerd[1449]: time="2026-03-11T02:25:31.574684371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtltt,Uid:adc87660-fa32-4458-aba8-d62f16053a90,Namespace:calico-system,Attempt:0,}"
Mar 11 02:25:31.730804 containerd[1449]: time="2026-03-11T02:25:31.730653888Z" level=error msg="Failed to destroy network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.733065 containerd[1449]: time="2026-03-11T02:25:31.732900173Z" level=error msg="encountered an error cleaning up failed sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.733259 containerd[1449]: time="2026-03-11T02:25:31.733221425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-grck2,Uid:4d15dd84-e281-4325-b44c-bfd2cf49adb4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.767427 containerd[1449]: time="2026-03-11T02:25:31.767083193Z" level=error msg="Failed to destroy network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.767698 containerd[1449]: time="2026-03-11T02:25:31.767613589Z" level=error msg="encountered an error cleaning up failed sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.767755 containerd[1449]: time="2026-03-11T02:25:31.767698661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd96b98bb-ttsjm,Uid:1a540df5-4f10-4a1a-9034-8b8ac7db4bef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.772312 containerd[1449]: time="2026-03-11T02:25:31.772273810Z" level=error msg="Failed to destroy network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.773239 containerd[1449]: time="2026-03-11T02:25:31.773146419Z" level=error msg="encountered an error cleaning up failed sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.773239 containerd[1449]: time="2026-03-11T02:25:31.773191300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-67dgv,Uid:3e4ab64e-7f0b-42f6-95eb-a75c21c49b91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.780104 kubelet[2504]: E0311 02:25:31.779557 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.780104 kubelet[2504]: E0311 02:25:31.779576 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.780104 kubelet[2504]: E0311 02:25:31.779652 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-67dgv"
Mar 11 02:25:31.780104 kubelet[2504]: E0311 02:25:31.779683 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-67dgv"
Mar 11 02:25:31.780247 kubelet[2504]: E0311 02:25:31.779789 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c54d5dff8-grck2"
Mar 11 02:25:31.780247 kubelet[2504]: E0311 02:25:31.779814 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c54d5dff8-grck2"
Mar 11 02:25:31.780247 kubelet[2504]: E0311 02:25:31.779818 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-67dgv_calico-system(3e4ab64e-7f0b-42f6-95eb-a75c21c49b91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-67dgv_calico-system(3e4ab64e-7f0b-42f6-95eb-a75c21c49b91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-67dgv" podUID="3e4ab64e-7f0b-42f6-95eb-a75c21c49b91"
Mar 11 02:25:31.780415 kubelet[2504]: E0311 02:25:31.779861 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c54d5dff8-grck2_calico-system(4d15dd84-e281-4325-b44c-bfd2cf49adb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c54d5dff8-grck2_calico-system(4d15dd84-e281-4325-b44c-bfd2cf49adb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c54d5dff8-grck2" podUID="4d15dd84-e281-4325-b44c-bfd2cf49adb4"
Mar 11 02:25:31.780415 kubelet[2504]: E0311 02:25:31.779900 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.780415 kubelet[2504]: E0311 02:25:31.779915 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fd96b98bb-ttsjm"
Mar 11 02:25:31.780553 kubelet[2504]: E0311 02:25:31.779928 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fd96b98bb-ttsjm"
Mar 11 02:25:31.780553 kubelet[2504]: E0311 02:25:31.780123 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fd96b98bb-ttsjm_calico-system(1a540df5-4f10-4a1a-9034-8b8ac7db4bef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fd96b98bb-ttsjm_calico-system(1a540df5-4f10-4a1a-9034-8b8ac7db4bef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fd96b98bb-ttsjm" podUID="1a540df5-4f10-4a1a-9034-8b8ac7db4bef"
Mar 11 02:25:31.792301 containerd[1449]: time="2026-03-11T02:25:31.792065938Z" level=error msg="Failed to destroy network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.794639 containerd[1449]: time="2026-03-11T02:25:31.794572162Z" level=error msg="encountered an error cleaning up failed sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.794639 containerd[1449]: time="2026-03-11T02:25:31.794635680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-bhbz4,Uid:98451933-bb15-4d85-b793-b6047852572d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.796052 containerd[1449]: time="2026-03-11T02:25:31.795166656Z" level=error msg="Failed to destroy network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.796789 kubelet[2504]: E0311 02:25:31.796757 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.796918 kubelet[2504]: E0311 02:25:31.796893 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c54d5dff8-bhbz4"
Mar 11 02:25:31.798321 kubelet[2504]: E0311 02:25:31.797077 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c54d5dff8-bhbz4"
Mar 11 02:25:31.798321 kubelet[2504]: E0311 02:25:31.797566 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.798321 kubelet[2504]: E0311 02:25:31.797611 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pgwpq"
Mar 11 02:25:31.798321 kubelet[2504]: E0311 02:25:31.797632 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pgwpq"
Mar 11 02:25:31.798508 containerd[1449]: time="2026-03-11T02:25:31.797160910Z" level=error msg="encountered an error cleaning up failed sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.798508 containerd[1449]: time="2026-03-11T02:25:31.797195992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pgwpq,Uid:1d28eda4-868c-4032-bb0a-0cda62dbcd9a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.798552 kubelet[2504]: E0311 02:25:31.797710 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pgwpq_kube-system(1d28eda4-868c-4032-bb0a-0cda62dbcd9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pgwpq_kube-system(1d28eda4-868c-4032-bb0a-0cda62dbcd9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pgwpq" podUID="1d28eda4-868c-4032-bb0a-0cda62dbcd9a"
Mar 11 02:25:31.798868 kubelet[2504]: E0311 02:25:31.798673 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c54d5dff8-bhbz4_calico-system(98451933-bb15-4d85-b793-b6047852572d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c54d5dff8-bhbz4_calico-system(98451933-bb15-4d85-b793-b6047852572d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c54d5dff8-bhbz4" podUID="98451933-bb15-4d85-b793-b6047852572d"
Mar 11 02:25:31.812483 containerd[1449]: time="2026-03-11T02:25:31.812300597Z" level=error msg="Failed to destroy network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.813246 containerd[1449]: time="2026-03-11T02:25:31.813081510Z" level=error msg="encountered an error cleaning up failed sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 11 02:25:31.813246 containerd[1449]: time="2026-03-11T02:25:31.813135850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bcsbm,Uid:fbcb37ad-a949-4830-a43a-9cfdd14b9b96,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": plugin type=\"calico\" failed (add):
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.814825 kubelet[2504]: E0311 02:25:31.813420 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.814825 kubelet[2504]: E0311 02:25:31.813471 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bcsbm" Mar 11 02:25:31.814825 kubelet[2504]: E0311 02:25:31.813488 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bcsbm" Mar 11 02:25:31.815326 kubelet[2504]: E0311 02:25:31.813530 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-bcsbm_kube-system(fbcb37ad-a949-4830-a43a-9cfdd14b9b96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-bcsbm_kube-system(fbcb37ad-a949-4830-a43a-9cfdd14b9b96)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bcsbm" podUID="fbcb37ad-a949-4830-a43a-9cfdd14b9b96" Mar 11 02:25:31.818926 kubelet[2504]: I0311 02:25:31.818780 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:31.821815 containerd[1449]: time="2026-03-11T02:25:31.821732610Z" level=error msg="Failed to destroy network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.823742 containerd[1449]: time="2026-03-11T02:25:31.823607040Z" level=error msg="encountered an error cleaning up failed sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.823742 containerd[1449]: time="2026-03-11T02:25:31.823693045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtltt,Uid:adc87660-fa32-4458-aba8-d62f16053a90,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.824080 kubelet[2504]: 
E0311 02:25:31.823999 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.824080 kubelet[2504]: E0311 02:25:31.824064 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtltt" Mar 11 02:25:31.824080 kubelet[2504]: E0311 02:25:31.824084 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtltt" Mar 11 02:25:31.824225 kubelet[2504]: E0311 02:25:31.824121 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtltt_calico-system(adc87660-fa32-4458-aba8-d62f16053a90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtltt_calico-system(adc87660-fa32-4458-aba8-d62f16053a90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtltt" podUID="adc87660-fa32-4458-aba8-d62f16053a90" Mar 11 02:25:31.827505 containerd[1449]: time="2026-03-11T02:25:31.827279598Z" level=error msg="Failed to destroy network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.828315 containerd[1449]: time="2026-03-11T02:25:31.828222269Z" level=error msg="encountered an error cleaning up failed sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.828315 containerd[1449]: time="2026-03-11T02:25:31.828298465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-685f947667-xs4f9,Uid:22c779a8-71f3-4720-a430-6dad918fffbd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.830885 kubelet[2504]: E0311 02:25:31.830820 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 11 02:25:31.831516 kubelet[2504]: E0311 02:25:31.831418 2504 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-685f947667-xs4f9" Mar 11 02:25:31.831516 kubelet[2504]: E0311 02:25:31.831445 2504 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-685f947667-xs4f9" Mar 11 02:25:31.831516 kubelet[2504]: E0311 02:25:31.831483 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-685f947667-xs4f9_calico-system(22c779a8-71f3-4720-a430-6dad918fffbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-685f947667-xs4f9_calico-system(22c779a8-71f3-4720-a430-6dad918fffbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-685f947667-xs4f9" podUID="22c779a8-71f3-4720-a430-6dad918fffbd" Mar 11 02:25:31.832817 kubelet[2504]: I0311 02:25:31.832800 2504 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:31.837062 kubelet[2504]: I0311 02:25:31.836494 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:31.846265 containerd[1449]: time="2026-03-11T02:25:31.846193113Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 02:25:31.847450 kubelet[2504]: I0311 02:25:31.846624 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:31.851679 containerd[1449]: time="2026-03-11T02:25:31.850797836Z" level=info msg="StopPodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\"" Mar 11 02:25:31.851679 containerd[1449]: time="2026-03-11T02:25:31.850857022Z" level=info msg="StopPodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\"" Mar 11 02:25:31.854441 containerd[1449]: time="2026-03-11T02:25:31.854291086Z" level=info msg="StopPodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\"" Mar 11 02:25:31.856929 containerd[1449]: time="2026-03-11T02:25:31.856902712Z" level=info msg="Ensure that sandbox e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28 in task-service has been cleanup successfully" Mar 11 02:25:31.857502 containerd[1449]: time="2026-03-11T02:25:31.856912787Z" level=info msg="Ensure that sandbox 6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e in task-service has been cleanup successfully" Mar 11 02:25:31.859912 containerd[1449]: time="2026-03-11T02:25:31.859800900Z" level=info msg="StopPodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\"" 
Mar 11 02:25:31.860140 containerd[1449]: time="2026-03-11T02:25:31.860052941Z" level=info msg="Ensure that sandbox 8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978 in task-service has been cleanup successfully" Mar 11 02:25:31.864027 containerd[1449]: time="2026-03-11T02:25:31.862654867Z" level=info msg="Ensure that sandbox f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c in task-service has been cleanup successfully" Mar 11 02:25:31.869575 kubelet[2504]: I0311 02:25:31.869506 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:31.871696 containerd[1449]: time="2026-03-11T02:25:31.870574517Z" level=info msg="StopPodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\"" Mar 11 02:25:31.871696 containerd[1449]: time="2026-03-11T02:25:31.870836950Z" level=info msg="Ensure that sandbox e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d in task-service has been cleanup successfully" Mar 11 02:25:31.885448 kubelet[2504]: I0311 02:25:31.885411 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:31.886298 containerd[1449]: time="2026-03-11T02:25:31.886273595Z" level=info msg="StopPodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\"" Mar 11 02:25:31.886640 containerd[1449]: time="2026-03-11T02:25:31.886620498Z" level=info msg="Ensure that sandbox d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a in task-service has been cleanup successfully" Mar 11 02:25:31.919321 containerd[1449]: time="2026-03-11T02:25:31.919180217Z" level=info msg="CreateContainer within sandbox \"ea87ed6382f0e1620f10dcfab1890e0e37aabcc808ce77ce445a20727885398e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"e2468a882b94cc11e5f079f8c274f3a6d3a0caf6933daa73f9a3a68bafda94d9\"" Mar 11 02:25:31.923419 containerd[1449]: time="2026-03-11T02:25:31.920908594Z" level=info msg="StartContainer for \"e2468a882b94cc11e5f079f8c274f3a6d3a0caf6933daa73f9a3a68bafda94d9\"" Mar 11 02:25:31.961313 containerd[1449]: time="2026-03-11T02:25:31.960585655Z" level=error msg="StopPodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" failed" error="failed to destroy network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.962228 containerd[1449]: time="2026-03-11T02:25:31.962133194Z" level=error msg="StopPodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" failed" error="failed to destroy network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.962862 kubelet[2504]: E0311 02:25:31.962812 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:31.963876 kubelet[2504]: E0311 02:25:31.963630 2504 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d"} Mar 11 
02:25:31.963876 kubelet[2504]: E0311 02:25:31.963691 2504 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98451933-bb15-4d85-b793-b6047852572d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 02:25:31.963876 kubelet[2504]: E0311 02:25:31.963732 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98451933-bb15-4d85-b793-b6047852572d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c54d5dff8-bhbz4" podUID="98451933-bb15-4d85-b793-b6047852572d" Mar 11 02:25:31.963876 kubelet[2504]: E0311 02:25:31.963478 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:31.963876 kubelet[2504]: E0311 02:25:31.963780 2504 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e"} Mar 11 02:25:31.964355 kubelet[2504]: 
E0311 02:25:31.963813 2504 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d15dd84-e281-4325-b44c-bfd2cf49adb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 02:25:31.964355 kubelet[2504]: E0311 02:25:31.963840 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d15dd84-e281-4325-b44c-bfd2cf49adb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c54d5dff8-grck2" podUID="4d15dd84-e281-4325-b44c-bfd2cf49adb4" Mar 11 02:25:31.965557 containerd[1449]: time="2026-03-11T02:25:31.965524132Z" level=error msg="StopPodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" failed" error="failed to destroy network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.965836 kubelet[2504]: E0311 02:25:31.965815 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:31.965910 kubelet[2504]: E0311 02:25:31.965897 2504 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c"} Mar 11 02:25:31.966045 kubelet[2504]: E0311 02:25:31.966029 2504 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbcb37ad-a949-4830-a43a-9cfdd14b9b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 02:25:31.966268 kubelet[2504]: E0311 02:25:31.966245 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbcb37ad-a949-4830-a43a-9cfdd14b9b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bcsbm" podUID="fbcb37ad-a949-4830-a43a-9cfdd14b9b96" Mar 11 02:25:31.978364 containerd[1449]: time="2026-03-11T02:25:31.978305059Z" level=error msg="StopPodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" failed" error="failed to destroy network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.979147 containerd[1449]: time="2026-03-11T02:25:31.979102156Z" level=error msg="StopPodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" failed" error="failed to destroy network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.980106 kubelet[2504]: E0311 02:25:31.979333 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:31.980106 kubelet[2504]: E0311 02:25:31.979438 2504 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28"} Mar 11 02:25:31.980106 kubelet[2504]: E0311 02:25:31.979469 2504 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 02:25:31.980106 kubelet[2504]: E0311 02:25:31.979614 
2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-67dgv" podUID="3e4ab64e-7f0b-42f6-95eb-a75c21c49b91" Mar 11 02:25:31.980417 kubelet[2504]: E0311 02:25:31.979789 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:31.980417 kubelet[2504]: E0311 02:25:31.979812 2504 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978"} Mar 11 02:25:31.980417 kubelet[2504]: E0311 02:25:31.979829 2504 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 02:25:31.980417 kubelet[2504]: E0311 02:25:31.979847 2504 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fd96b98bb-ttsjm" podUID="1a540df5-4f10-4a1a-9034-8b8ac7db4bef" Mar 11 02:25:31.997106 containerd[1449]: time="2026-03-11T02:25:31.996829367Z" level=error msg="StopPodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" failed" error="failed to destroy network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:25:31.998244 kubelet[2504]: E0311 02:25:31.997431 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:31.998244 kubelet[2504]: E0311 02:25:31.997477 2504 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a"} Mar 11 02:25:31.998244 kubelet[2504]: E0311 02:25:31.997504 2504 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d28eda4-868c-4032-bb0a-0cda62dbcd9a\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 02:25:31.998244 kubelet[2504]: E0311 02:25:31.997536 2504 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d28eda4-868c-4032-bb0a-0cda62dbcd9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pgwpq" podUID="1d28eda4-868c-4032-bb0a-0cda62dbcd9a" Mar 11 02:25:32.010302 systemd[1]: Started cri-containerd-e2468a882b94cc11e5f079f8c274f3a6d3a0caf6933daa73f9a3a68bafda94d9.scope - libcontainer container e2468a882b94cc11e5f079f8c274f3a6d3a0caf6933daa73f9a3a68bafda94d9. 
Mar 11 02:25:32.060843 containerd[1449]: time="2026-03-11T02:25:32.060651845Z" level=info msg="StartContainer for \"e2468a882b94cc11e5f079f8c274f3a6d3a0caf6933daa73f9a3a68bafda94d9\" returns successfully"
Mar 11 02:25:32.891669 kubelet[2504]: I0311 02:25:32.891609 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9"
Mar 11 02:25:32.896035 kubelet[2504]: I0311 02:25:32.893827 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28"
Mar 11 02:25:32.896134 containerd[1449]: time="2026-03-11T02:25:32.894595898Z" level=info msg="StopPodSandbox for \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\""
Mar 11 02:25:32.896134 containerd[1449]: time="2026-03-11T02:25:32.894647246Z" level=info msg="StopPodSandbox for \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\""
Mar 11 02:25:32.896134 containerd[1449]: time="2026-03-11T02:25:32.894761553Z" level=info msg="Ensure that sandbox 29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9 in task-service has been cleanup successfully"
Mar 11 02:25:32.902162 containerd[1449]: time="2026-03-11T02:25:32.901493748Z" level=info msg="StopPodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\""
Mar 11 02:25:32.906119 containerd[1449]: time="2026-03-11T02:25:32.906090725Z" level=info msg="Ensure that sandbox 7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28 in task-service has been cleanup successfully"
Mar 11 02:25:32.968877 kubelet[2504]: I0311 02:25:32.968290 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-clhc6" podStartSLOduration=4.09307882 podStartE2EDuration="19.968268923s" podCreationTimestamp="2026-03-11 02:25:13 +0000 UTC" firstStartedPulling="2026-03-11 02:25:14.055244921 +0000 UTC m=+18.924853665" lastFinishedPulling="2026-03-11 02:25:29.930435023 +0000 UTC m=+34.800043768" observedRunningTime="2026-03-11 02:25:32.94611117 +0000 UTC m=+37.815719994" watchObservedRunningTime="2026-03-11 02:25:32.968268923 +0000 UTC m=+37.837877678"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.050 [INFO][3791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.050 [INFO][3791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" iface="eth0" netns="/var/run/netns/cni-e7d2f4ce-5f06-a8a7-dbc8-6e913a2d2bb7"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.050 [INFO][3791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" iface="eth0" netns="/var/run/netns/cni-e7d2f4ce-5f06-a8a7-dbc8-6e913a2d2bb7"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.051 [INFO][3791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" iface="eth0" netns="/var/run/netns/cni-e7d2f4ce-5f06-a8a7-dbc8-6e913a2d2bb7"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.051 [INFO][3791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.051 [INFO][3791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.121 [INFO][3848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.122 [INFO][3848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.122 [INFO][3848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.152 [WARNING][3848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.152 [INFO][3848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.159 [INFO][3848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:25:33.176756 containerd[1449]: 2026-03-11 02:25:33.168 [INFO][3791] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9"
Mar 11 02:25:33.179848 systemd[1]: run-netns-cni\x2de7d2f4ce\x2d5f06\x2da8a7\x2ddbc8\x2d6e913a2d2bb7.mount: Deactivated successfully.
Mar 11 02:25:33.182526 containerd[1449]: time="2026-03-11T02:25:33.182339697Z" level=info msg="TearDown network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\" successfully"
Mar 11 02:25:33.182526 containerd[1449]: time="2026-03-11T02:25:33.182380258Z" level=info msg="StopPodSandbox for \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\" returns successfully"
Mar 11 02:25:33.188409 containerd[1449]: time="2026-03-11T02:25:33.188230252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-685f947667-xs4f9,Uid:22c779a8-71f3-4720-a430-6dad918fffbd,Namespace:calico-system,Attempt:1,}"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.101 [INFO][3810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.102 [INFO][3810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" iface="eth0" netns="/var/run/netns/cni-841cbd68-5710-cdb0-8643-2a38ac67ada2"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.104 [INFO][3810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" iface="eth0" netns="/var/run/netns/cni-841cbd68-5710-cdb0-8643-2a38ac67ada2"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.105 [INFO][3810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" iface="eth0" netns="/var/run/netns/cni-841cbd68-5710-cdb0-8643-2a38ac67ada2"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.106 [INFO][3810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.106 [INFO][3810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.177 [INFO][3865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.177 [INFO][3865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.177 [INFO][3865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.188 [WARNING][3865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.189 [INFO][3865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0"
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.193 [INFO][3865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:25:33.199105 containerd[1449]: 2026-03-11 02:25:33.196 [INFO][3810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28"
Mar 11 02:25:33.203413 containerd[1449]: time="2026-03-11T02:25:33.203293717Z" level=info msg="TearDown network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\" successfully"
Mar 11 02:25:33.203413 containerd[1449]: time="2026-03-11T02:25:33.203372747Z" level=info msg="StopPodSandbox for \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\" returns successfully"
Mar 11 02:25:33.204929 systemd[1]: run-netns-cni\x2d841cbd68\x2d5710\x2dcdb0\x2d8643\x2d2a38ac67ada2.mount: Deactivated successfully.
Mar 11 02:25:33.219012 containerd[1449]: time="2026-03-11T02:25:33.218920033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtltt,Uid:adc87660-fa32-4458-aba8-d62f16053a90,Namespace:calico-system,Attempt:1,}"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.081 [INFO][3806] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.081 [INFO][3806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" iface="eth0" netns="/var/run/netns/cni-31a0f701-de1c-4786-9bcb-441b89ee8b49"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.082 [INFO][3806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" iface="eth0" netns="/var/run/netns/cni-31a0f701-de1c-4786-9bcb-441b89ee8b49"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.085 [INFO][3806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" iface="eth0" netns="/var/run/netns/cni-31a0f701-de1c-4786-9bcb-441b89ee8b49"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.085 [INFO][3806] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.085 [INFO][3806] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.184 [INFO][3858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.185 [INFO][3858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.193 [INFO][3858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.204 [WARNING][3858] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.204 [INFO][3858] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0"
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.207 [INFO][3858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:25:33.228788 containerd[1449]: 2026-03-11 02:25:33.222 [INFO][3806] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978"
Mar 11 02:25:33.231669 systemd[1]: run-netns-cni\x2d31a0f701\x2dde1c\x2d4786\x2d9bcb\x2d441b89ee8b49.mount: Deactivated successfully.
Mar 11 02:25:33.232152 containerd[1449]: time="2026-03-11T02:25:33.232079706Z" level=info msg="TearDown network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" successfully"
Mar 11 02:25:33.232152 containerd[1449]: time="2026-03-11T02:25:33.232104577Z" level=info msg="StopPodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" returns successfully"
Mar 11 02:25:33.325207 kubelet[2504]: I0311 02:25:33.325060 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-nginx-config\") pod \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") "
Mar 11 02:25:33.325207 kubelet[2504]: I0311 02:25:33.325165 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-backend-key-pair\") pod \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") "
Mar 11 02:25:33.325207 kubelet[2504]: I0311 02:25:33.325208 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-ca-bundle\") pod \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") "
Mar 11 02:25:33.325387 kubelet[2504]: I0311 02:25:33.325245 2504 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sth8t\" (UniqueName: \"kubernetes.io/projected/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-kube-api-access-sth8t\") pod \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\" (UID: \"1a540df5-4f10-4a1a-9034-8b8ac7db4bef\") "
Mar 11 02:25:33.327105 kubelet[2504]: I0311 02:25:33.325880 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "1a540df5-4f10-4a1a-9034-8b8ac7db4bef" (UID: "1a540df5-4f10-4a1a-9034-8b8ac7db4bef"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 11 02:25:33.327869 kubelet[2504]: I0311 02:25:33.327827 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1a540df5-4f10-4a1a-9034-8b8ac7db4bef" (UID: "1a540df5-4f10-4a1a-9034-8b8ac7db4bef"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 11 02:25:33.340233 kubelet[2504]: I0311 02:25:33.340194 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1a540df5-4f10-4a1a-9034-8b8ac7db4bef" (UID: "1a540df5-4f10-4a1a-9034-8b8ac7db4bef"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 11 02:25:33.342768 systemd[1]: var-lib-kubelet-pods-1a540df5\x2d4f10\x2d4a1a\x2d9034\x2d8b8ac7db4bef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsth8t.mount: Deactivated successfully.
Mar 11 02:25:33.343508 kubelet[2504]: I0311 02:25:33.342934 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-kube-api-access-sth8t" (OuterVolumeSpecName: "kube-api-access-sth8t") pod "1a540df5-4f10-4a1a-9034-8b8ac7db4bef" (UID: "1a540df5-4f10-4a1a-9034-8b8ac7db4bef"). InnerVolumeSpecName "kube-api-access-sth8t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 11 02:25:33.342915 systemd[1]: var-lib-kubelet-pods-1a540df5\x2d4f10\x2d4a1a\x2d9034\x2d8b8ac7db4bef-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 11 02:25:33.426281 kubelet[2504]: I0311 02:25:33.426240 2504 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-nginx-config\") on node \"localhost\" DevicePath \"\""
Mar 11 02:25:33.427390 kubelet[2504]: I0311 02:25:33.427217 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Mar 11 02:25:33.427390 kubelet[2504]: I0311 02:25:33.427284 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Mar 11 02:25:33.427390 kubelet[2504]: I0311 02:25:33.427294 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sth8t\" (UniqueName: \"kubernetes.io/projected/1a540df5-4f10-4a1a-9034-8b8ac7db4bef-kube-api-access-sth8t\") on node \"localhost\" DevicePath \"\""
Mar 11 02:25:33.444810 systemd-networkd[1375]: caliaabb350c4da: Link UP
Mar 11 02:25:33.446881 systemd-networkd[1375]: caliaabb350c4da: Gained carrier
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.285 [ERROR][3883] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.305 [INFO][3883] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0 calico-kube-controllers-685f947667- calico-system 22c779a8-71f3-4720-a430-6dad918fffbd 958 0 2026-03-11 02:25:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:685f947667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-685f947667-xs4f9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaabb350c4da [] [] }} ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.305 [INFO][3883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.366 [INFO][3907] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" HandleID="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.377 [INFO][3907] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" HandleID="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-685f947667-xs4f9", "timestamp":"2026-03-11 02:25:33.366839465 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00048e6e0)}
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.377 [INFO][3907] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.377 [INFO][3907] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.377 [INFO][3907] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.383 [INFO][3907] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.394 [INFO][3907] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.401 [INFO][3907] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.405 [INFO][3907] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.408 [INFO][3907] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.408 [INFO][3907] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.410 [INFO][3907] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.415 [INFO][3907] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.422 [INFO][3907] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.422 [INFO][3907] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" host="localhost"
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.423 [INFO][3907] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:25:33.470192 containerd[1449]: 2026-03-11 02:25:33.423 [INFO][3907] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" HandleID="k8s-pod-network.b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.471413 containerd[1449]: 2026-03-11 02:25:33.425 [INFO][3883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0", GenerateName:"calico-kube-controllers-685f947667-", Namespace:"calico-system", SelfLink:"", UID:"22c779a8-71f3-4720-a430-6dad918fffbd", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"685f947667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-685f947667-xs4f9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaabb350c4da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 11 02:25:33.471413 containerd[1449]: 2026-03-11 02:25:33.426 [INFO][3883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.471413 containerd[1449]: 2026-03-11 02:25:33.426 [INFO][3883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaabb350c4da ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.471413 containerd[1449]: 2026-03-11 02:25:33.448 [INFO][3883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.471413 containerd[1449]: 2026-03-11 02:25:33.448 [INFO][3883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0", GenerateName:"calico-kube-controllers-685f947667-", Namespace:"calico-system", SelfLink:"", UID:"22c779a8-71f3-4720-a430-6dad918fffbd", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"685f947667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922", Pod:"calico-kube-controllers-685f947667-xs4f9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaabb350c4da", MAC:"fa:db:02:82:0d:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 11 02:25:33.471413 containerd[1449]: 2026-03-11 02:25:33.464 [INFO][3883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922" Namespace="calico-system" Pod="calico-kube-controllers-685f947667-xs4f9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0"
Mar 11 02:25:33.506219 containerd[1449]: time="2026-03-11T02:25:33.505440314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:25:33.506219 containerd[1449]: time="2026-03-11T02:25:33.505492209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:25:33.506219 containerd[1449]: time="2026-03-11T02:25:33.505505245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:25:33.506219 containerd[1449]: time="2026-03-11T02:25:33.505577381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:25:33.540400 systemd-networkd[1375]: cali4e9c80e5680: Link UP
Mar 11 02:25:33.542245 systemd-networkd[1375]: cali4e9c80e5680: Gained carrier
Mar 11 02:25:33.542763 systemd[1]: Started cri-containerd-b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922.scope - libcontainer container b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922.
Mar 11 02:25:33.560539 systemd[1]: Removed slice kubepods-besteffort-pod1a540df5_4f10_4a1a_9034_8b8ac7db4bef.slice - libcontainer container kubepods-besteffort-pod1a540df5_4f10_4a1a_9034_8b8ac7db4bef.slice.
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.316 [ERROR][3890] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.342 [INFO][3890] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wtltt-eth0 csi-node-driver- calico-system adc87660-fa32-4458-aba8-d62f16053a90 960 0 2026-03-11 02:25:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wtltt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e9c80e5680 [] [] }} ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.342 [INFO][3890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.390 [INFO][3917] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" HandleID="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Workload="localhost-k8s-csi--node--driver--wtltt-eth0"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.403 [INFO][3917] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" HandleID="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wtltt", "timestamp":"2026-03-11 02:25:33.390269758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004dd1e0)}
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.403 [INFO][3917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.423 [INFO][3917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.423 [INFO][3917] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.486 [INFO][3917] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.494 [INFO][3917] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.502 [INFO][3917] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.506 [INFO][3917] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.510 [INFO][3917] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.511 [INFO][3917] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.513 [INFO][3917] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.520 [INFO][3917] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" host="localhost"
Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.528 [INFO][3917] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26
handle="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" host="localhost" Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.528 [INFO][3917] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" host="localhost" Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.528 [INFO][3917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:33.573775 containerd[1449]: 2026-03-11 02:25:33.528 [INFO][3917] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" HandleID="k8s-pod-network.177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:33.575540 containerd[1449]: 2026-03-11 02:25:33.532 [INFO][3890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtltt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"adc87660-fa32-4458-aba8-d62f16053a90", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wtltt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e9c80e5680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:33.575540 containerd[1449]: 2026-03-11 02:25:33.532 [INFO][3890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:33.575540 containerd[1449]: 2026-03-11 02:25:33.532 [INFO][3890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e9c80e5680 ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:33.575540 containerd[1449]: 2026-03-11 02:25:33.542 [INFO][3890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:33.575540 containerd[1449]: 2026-03-11 02:25:33.544 [INFO][3890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" 
Namespace="calico-system" Pod="csi-node-driver-wtltt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtltt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"adc87660-fa32-4458-aba8-d62f16053a90", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e", Pod:"csi-node-driver-wtltt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e9c80e5680", MAC:"c6:b7:60:10:c3:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:33.575540 containerd[1449]: 2026-03-11 02:25:33.568 [INFO][3890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e" Namespace="calico-system" Pod="csi-node-driver-wtltt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:33.585752 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:33.610005 containerd[1449]: time="2026-03-11T02:25:33.608437635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:33.610005 containerd[1449]: time="2026-03-11T02:25:33.608493599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:33.610005 containerd[1449]: time="2026-03-11T02:25:33.608506605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:33.610005 containerd[1449]: time="2026-03-11T02:25:33.608624052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:33.629236 containerd[1449]: time="2026-03-11T02:25:33.629155593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-685f947667-xs4f9,Uid:22c779a8-71f3-4720-a430-6dad918fffbd,Namespace:calico-system,Attempt:1,} returns sandbox id \"b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922\"" Mar 11 02:25:33.638246 systemd[1]: Started cri-containerd-177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e.scope - libcontainer container 177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e. 
Mar 11 02:25:33.641042 containerd[1449]: time="2026-03-11T02:25:33.640778405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 11 02:25:33.662029 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:33.681625 containerd[1449]: time="2026-03-11T02:25:33.681317020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtltt,Uid:adc87660-fa32-4458-aba8-d62f16053a90,Namespace:calico-system,Attempt:1,} returns sandbox id \"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e\"" Mar 11 02:25:34.012566 systemd[1]: Created slice kubepods-besteffort-pod07e8a9de_f53f_4aba_91bf_206307305f35.slice - libcontainer container kubepods-besteffort-pod07e8a9de_f53f_4aba_91bf_206307305f35.slice. Mar 11 02:25:34.109116 kernel: calico-node[4115]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 11 02:25:34.133204 kubelet[2504]: I0311 02:25:34.133055 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07e8a9de-f53f-4aba-91bf-206307305f35-whisker-backend-key-pair\") pod \"whisker-79f56774c-z6r6r\" (UID: \"07e8a9de-f53f-4aba-91bf-206307305f35\") " pod="calico-system/whisker-79f56774c-z6r6r" Mar 11 02:25:34.134477 kubelet[2504]: I0311 02:25:34.134302 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/07e8a9de-f53f-4aba-91bf-206307305f35-nginx-config\") pod \"whisker-79f56774c-z6r6r\" (UID: \"07e8a9de-f53f-4aba-91bf-206307305f35\") " pod="calico-system/whisker-79f56774c-z6r6r" Mar 11 02:25:34.134477 kubelet[2504]: I0311 02:25:34.134356 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbdf\" (UniqueName: 
\"kubernetes.io/projected/07e8a9de-f53f-4aba-91bf-206307305f35-kube-api-access-2mbdf\") pod \"whisker-79f56774c-z6r6r\" (UID: \"07e8a9de-f53f-4aba-91bf-206307305f35\") " pod="calico-system/whisker-79f56774c-z6r6r" Mar 11 02:25:34.134477 kubelet[2504]: I0311 02:25:34.134394 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e8a9de-f53f-4aba-91bf-206307305f35-whisker-ca-bundle\") pod \"whisker-79f56774c-z6r6r\" (UID: \"07e8a9de-f53f-4aba-91bf-206307305f35\") " pod="calico-system/whisker-79f56774c-z6r6r" Mar 11 02:25:34.335677 containerd[1449]: time="2026-03-11T02:25:34.335310380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f56774c-z6r6r,Uid:07e8a9de-f53f-4aba-91bf-206307305f35,Namespace:calico-system,Attempt:0,}" Mar 11 02:25:34.737179 systemd-networkd[1375]: calicacb15246c8: Link UP Mar 11 02:25:34.738304 systemd-networkd[1375]: calicacb15246c8: Gained carrier Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.517 [INFO][4186] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79f56774c--z6r6r-eth0 whisker-79f56774c- calico-system 07e8a9de-f53f-4aba-91bf-206307305f35 987 0 2026-03-11 02:25:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79f56774c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79f56774c-z6r6r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicacb15246c8 [] [] }} ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.519 [INFO][4186] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.649 [INFO][4201] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" HandleID="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Workload="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.665 [INFO][4201] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" HandleID="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Workload="localhost-k8s-whisker--79f56774c--z6r6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79f56774c-z6r6r", "timestamp":"2026-03-11 02:25:34.649385479 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000140420)} Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.665 [INFO][4201] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.666 [INFO][4201] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.667 [INFO][4201] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.672 [INFO][4201] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.681 [INFO][4201] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.691 [INFO][4201] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.695 [INFO][4201] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.699 [INFO][4201] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.699 [INFO][4201] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.702 [INFO][4201] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2 Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.708 [INFO][4201] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.720 [INFO][4201] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.720 [INFO][4201] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" host="localhost" Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.721 [INFO][4201] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:34.764595 containerd[1449]: 2026-03-11 02:25:34.721 [INFO][4201] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" HandleID="k8s-pod-network.30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Workload="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.766542 containerd[1449]: 2026-03-11 02:25:34.727 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79f56774c--z6r6r-eth0", GenerateName:"whisker-79f56774c-", Namespace:"calico-system", SelfLink:"", UID:"07e8a9de-f53f-4aba-91bf-206307305f35", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f56774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79f56774c-z6r6r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicacb15246c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:34.766542 containerd[1449]: 2026-03-11 02:25:34.727 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.766542 containerd[1449]: 2026-03-11 02:25:34.727 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicacb15246c8 ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.766542 containerd[1449]: 2026-03-11 02:25:34.740 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.766542 containerd[1449]: 2026-03-11 02:25:34.741 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" 
WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79f56774c--z6r6r-eth0", GenerateName:"whisker-79f56774c-", Namespace:"calico-system", SelfLink:"", UID:"07e8a9de-f53f-4aba-91bf-206307305f35", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f56774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2", Pod:"whisker-79f56774c-z6r6r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicacb15246c8", MAC:"86:52:bd:4b:14:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:34.766542 containerd[1449]: 2026-03-11 02:25:34.757 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2" Namespace="calico-system" Pod="whisker-79f56774c-z6r6r" WorkloadEndpoint="localhost-k8s-whisker--79f56774c--z6r6r-eth0" Mar 11 02:25:34.820931 containerd[1449]: time="2026-03-11T02:25:34.820354846Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:34.820931 containerd[1449]: time="2026-03-11T02:25:34.820438185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:34.820931 containerd[1449]: time="2026-03-11T02:25:34.820449538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:34.820931 containerd[1449]: time="2026-03-11T02:25:34.820535290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:34.855694 systemd[1]: Started cri-containerd-30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2.scope - libcontainer container 30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2. Mar 11 02:25:34.878790 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:34.955748 containerd[1449]: time="2026-03-11T02:25:34.955625108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f56774c-z6r6r,Uid:07e8a9de-f53f-4aba-91bf-206307305f35,Namespace:calico-system,Attempt:0,} returns sandbox id \"30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2\"" Mar 11 02:25:34.977715 systemd-networkd[1375]: vxlan.calico: Link UP Mar 11 02:25:34.977728 systemd-networkd[1375]: vxlan.calico: Gained carrier Mar 11 02:25:35.347211 systemd-networkd[1375]: cali4e9c80e5680: Gained IPv6LL Mar 11 02:25:35.477175 systemd-networkd[1375]: caliaabb350c4da: Gained IPv6LL Mar 11 02:25:35.558503 kubelet[2504]: I0311 02:25:35.558376 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a540df5-4f10-4a1a-9034-8b8ac7db4bef" path="/var/lib/kubelet/pods/1a540df5-4f10-4a1a-9034-8b8ac7db4bef/volumes" Mar 11 02:25:36.043598 
containerd[1449]: time="2026-03-11T02:25:36.043367923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:36.045215 containerd[1449]: time="2026-03-11T02:25:36.045012700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 11 02:25:36.046614 containerd[1449]: time="2026-03-11T02:25:36.046521287Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:36.050572 containerd[1449]: time="2026-03-11T02:25:36.050441720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:36.051161 containerd[1449]: time="2026-03-11T02:25:36.050900091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.41005438s" Mar 11 02:25:36.051161 containerd[1449]: time="2026-03-11T02:25:36.051084401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 11 02:25:36.053220 containerd[1449]: time="2026-03-11T02:25:36.052867832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 11 02:25:36.073919 containerd[1449]: time="2026-03-11T02:25:36.073849683Z" level=info msg="CreateContainer within sandbox 
\"b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 11 02:25:36.095791 containerd[1449]: time="2026-03-11T02:25:36.095639455Z" level=info msg="CreateContainer within sandbox \"b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"365c4285b77a7db137ab9f5186533e2cfe533179e3705072783c1c777c476a3e\"" Mar 11 02:25:36.098711 containerd[1449]: time="2026-03-11T02:25:36.096665384Z" level=info msg="StartContainer for \"365c4285b77a7db137ab9f5186533e2cfe533179e3705072783c1c777c476a3e\"" Mar 11 02:25:36.150298 systemd[1]: Started cri-containerd-365c4285b77a7db137ab9f5186533e2cfe533179e3705072783c1c777c476a3e.scope - libcontainer container 365c4285b77a7db137ab9f5186533e2cfe533179e3705072783c1c777c476a3e. Mar 11 02:25:36.182410 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Mar 11 02:25:36.228544 containerd[1449]: time="2026-03-11T02:25:36.228449212Z" level=info msg="StartContainer for \"365c4285b77a7db137ab9f5186533e2cfe533179e3705072783c1c777c476a3e\" returns successfully" Mar 11 02:25:36.243511 systemd-networkd[1375]: calicacb15246c8: Gained IPv6LL Mar 11 02:25:36.767048 containerd[1449]: time="2026-03-11T02:25:36.766907502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:36.768515 containerd[1449]: time="2026-03-11T02:25:36.768436355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 11 02:25:36.769838 containerd[1449]: time="2026-03-11T02:25:36.769744949Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:36.772902 containerd[1449]: time="2026-03-11T02:25:36.772816136Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:36.773917 containerd[1449]: time="2026-03-11T02:25:36.773816487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 720.91129ms" Mar 11 02:25:36.774048 containerd[1449]: time="2026-03-11T02:25:36.774025156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 11 02:25:36.777291 containerd[1449]: time="2026-03-11T02:25:36.775342509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 11 02:25:36.781661 containerd[1449]: time="2026-03-11T02:25:36.781604558Z" level=info msg="CreateContainer within sandbox \"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 11 02:25:36.807894 containerd[1449]: time="2026-03-11T02:25:36.807798053Z" level=info msg="CreateContainer within sandbox \"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b02d41ffea5fbb0e7417bf44229dbfc2f398f409c41f86f33f876d4ac38a621c\"" Mar 11 02:25:36.808928 containerd[1449]: time="2026-03-11T02:25:36.808862385Z" level=info msg="StartContainer for \"b02d41ffea5fbb0e7417bf44229dbfc2f398f409c41f86f33f876d4ac38a621c\"" Mar 11 02:25:36.858273 systemd[1]: Started cri-containerd-b02d41ffea5fbb0e7417bf44229dbfc2f398f409c41f86f33f876d4ac38a621c.scope - libcontainer container 
b02d41ffea5fbb0e7417bf44229dbfc2f398f409c41f86f33f876d4ac38a621c. Mar 11 02:25:37.001651 containerd[1449]: time="2026-03-11T02:25:37.001612045Z" level=info msg="StartContainer for \"b02d41ffea5fbb0e7417bf44229dbfc2f398f409c41f86f33f876d4ac38a621c\" returns successfully" Mar 11 02:25:37.008831 kubelet[2504]: I0311 02:25:37.005432 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-685f947667-xs4f9" podStartSLOduration=21.590249605 podStartE2EDuration="24.00541624s" podCreationTimestamp="2026-03-11 02:25:13 +0000 UTC" firstStartedPulling="2026-03-11 02:25:33.637522359 +0000 UTC m=+38.507131104" lastFinishedPulling="2026-03-11 02:25:36.052688993 +0000 UTC m=+40.922297739" observedRunningTime="2026-03-11 02:25:37.001120265 +0000 UTC m=+41.870729041" watchObservedRunningTime="2026-03-11 02:25:37.00541624 +0000 UTC m=+41.875024985" Mar 11 02:25:37.616547 containerd[1449]: time="2026-03-11T02:25:37.616370376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:37.617486 containerd[1449]: time="2026-03-11T02:25:37.617435472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 11 02:25:37.620026 containerd[1449]: time="2026-03-11T02:25:37.619865573Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:37.623924 containerd[1449]: time="2026-03-11T02:25:37.623824652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:37.625543 containerd[1449]: time="2026-03-11T02:25:37.625364915Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 849.992446ms" Mar 11 02:25:37.625543 containerd[1449]: time="2026-03-11T02:25:37.625453764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 11 02:25:37.629390 containerd[1449]: time="2026-03-11T02:25:37.628696532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 11 02:25:37.639106 containerd[1449]: time="2026-03-11T02:25:37.638907637Z" level=info msg="CreateContainer within sandbox \"30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 11 02:25:37.664296 containerd[1449]: time="2026-03-11T02:25:37.664157384Z" level=info msg="CreateContainer within sandbox \"30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fb46c3634cca3c0fcaaf61da3c2ad73f614429813c2d98204aba28d766776b9e\"" Mar 11 02:25:37.665609 containerd[1449]: time="2026-03-11T02:25:37.665551248Z" level=info msg="StartContainer for \"fb46c3634cca3c0fcaaf61da3c2ad73f614429813c2d98204aba28d766776b9e\"" Mar 11 02:25:37.727567 systemd[1]: Started cri-containerd-fb46c3634cca3c0fcaaf61da3c2ad73f614429813c2d98204aba28d766776b9e.scope - libcontainer container fb46c3634cca3c0fcaaf61da3c2ad73f614429813c2d98204aba28d766776b9e. 
Mar 11 02:25:37.798441 containerd[1449]: time="2026-03-11T02:25:37.798364746Z" level=info msg="StartContainer for \"fb46c3634cca3c0fcaaf61da3c2ad73f614429813c2d98204aba28d766776b9e\" returns successfully" Mar 11 02:25:38.444485 containerd[1449]: time="2026-03-11T02:25:38.444360309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:38.445900 containerd[1449]: time="2026-03-11T02:25:38.445749720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 11 02:25:38.447410 containerd[1449]: time="2026-03-11T02:25:38.447297800Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:38.450862 containerd[1449]: time="2026-03-11T02:25:38.450738137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:38.452200 containerd[1449]: time="2026-03-11T02:25:38.452093873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 823.358235ms" Mar 11 02:25:38.452200 containerd[1449]: time="2026-03-11T02:25:38.452177922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 11 02:25:38.454437 containerd[1449]: 
time="2026-03-11T02:25:38.454350350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 11 02:25:38.460332 containerd[1449]: time="2026-03-11T02:25:38.460263306Z" level=info msg="CreateContainer within sandbox \"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 11 02:25:38.482960 containerd[1449]: time="2026-03-11T02:25:38.482867803Z" level=info msg="CreateContainer within sandbox \"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"75ee9e6e2f1da45459a6917d03466b31c2c5f7194c9d7b990da921fed4d8011c\"" Mar 11 02:25:38.484058 containerd[1449]: time="2026-03-11T02:25:38.484030496Z" level=info msg="StartContainer for \"75ee9e6e2f1da45459a6917d03466b31c2c5f7194c9d7b990da921fed4d8011c\"" Mar 11 02:25:38.533104 systemd[1]: run-containerd-runc-k8s.io-75ee9e6e2f1da45459a6917d03466b31c2c5f7194c9d7b990da921fed4d8011c-runc.HWlM9a.mount: Deactivated successfully. Mar 11 02:25:38.541225 systemd[1]: Started cri-containerd-75ee9e6e2f1da45459a6917d03466b31c2c5f7194c9d7b990da921fed4d8011c.scope - libcontainer container 75ee9e6e2f1da45459a6917d03466b31c2c5f7194c9d7b990da921fed4d8011c. 
Mar 11 02:25:38.589220 containerd[1449]: time="2026-03-11T02:25:38.588923478Z" level=info msg="StartContainer for \"75ee9e6e2f1da45459a6917d03466b31c2c5f7194c9d7b990da921fed4d8011c\" returns successfully" Mar 11 02:25:38.732704 kubelet[2504]: I0311 02:25:38.732139 2504 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 11 02:25:38.734116 kubelet[2504]: I0311 02:25:38.733528 2504 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 11 02:25:39.022132 kubelet[2504]: I0311 02:25:39.021635 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wtltt" podStartSLOduration=21.252244662 podStartE2EDuration="26.021396186s" podCreationTimestamp="2026-03-11 02:25:13 +0000 UTC" firstStartedPulling="2026-03-11 02:25:33.684591347 +0000 UTC m=+38.554200092" lastFinishedPulling="2026-03-11 02:25:38.453742871 +0000 UTC m=+43.323351616" observedRunningTime="2026-03-11 02:25:39.020436407 +0000 UTC m=+43.890045202" watchObservedRunningTime="2026-03-11 02:25:39.021396186 +0000 UTC m=+43.891004992" Mar 11 02:25:39.462168 containerd[1449]: time="2026-03-11T02:25:39.461785391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:39.463359 containerd[1449]: time="2026-03-11T02:25:39.463242885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 11 02:25:39.465188 containerd[1449]: time="2026-03-11T02:25:39.465054489Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:39.469092 containerd[1449]: 
time="2026-03-11T02:25:39.468927164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:39.469912 containerd[1449]: time="2026-03-11T02:25:39.469850766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.015319614s" Mar 11 02:25:39.469912 containerd[1449]: time="2026-03-11T02:25:39.469889033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 11 02:25:39.479158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030000996.mount: Deactivated successfully. Mar 11 02:25:39.481413 containerd[1449]: time="2026-03-11T02:25:39.481370283Z" level=info msg="CreateContainer within sandbox \"30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 11 02:25:39.511389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820898335.mount: Deactivated successfully. 
Mar 11 02:25:39.517063 containerd[1449]: time="2026-03-11T02:25:39.516798977Z" level=info msg="CreateContainer within sandbox \"30f4b03f70c4144453a0e4e0a6ba9b76917a1c66c3d192b2ceec8b50255144b2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d525928387ddf964a5b78378ab667a239756cc074be109789f1a3d2494fa3ebb\"" Mar 11 02:25:39.519058 containerd[1449]: time="2026-03-11T02:25:39.517757303Z" level=info msg="StartContainer for \"d525928387ddf964a5b78378ab667a239756cc074be109789f1a3d2494fa3ebb\"" Mar 11 02:25:39.580260 systemd[1]: Started cri-containerd-d525928387ddf964a5b78378ab667a239756cc074be109789f1a3d2494fa3ebb.scope - libcontainer container d525928387ddf964a5b78378ab667a239756cc074be109789f1a3d2494fa3ebb. Mar 11 02:25:39.655900 containerd[1449]: time="2026-03-11T02:25:39.655780530Z" level=info msg="StartContainer for \"d525928387ddf964a5b78378ab667a239756cc074be109789f1a3d2494fa3ebb\" returns successfully" Mar 11 02:25:40.476617 systemd[1]: run-containerd-runc-k8s.io-d525928387ddf964a5b78378ab667a239756cc074be109789f1a3d2494fa3ebb-runc.QygZx5.mount: Deactivated successfully. 
Mar 11 02:25:42.559498 containerd[1449]: time="2026-03-11T02:25:42.559292795Z" level=info msg="StopPodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\"" Mar 11 02:25:42.675587 kubelet[2504]: I0311 02:25:42.674300 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79f56774c-z6r6r" podStartSLOduration=5.174215854 podStartE2EDuration="9.674277979s" podCreationTimestamp="2026-03-11 02:25:33 +0000 UTC" firstStartedPulling="2026-03-11 02:25:34.971277144 +0000 UTC m=+39.840885889" lastFinishedPulling="2026-03-11 02:25:39.471339269 +0000 UTC m=+44.340948014" observedRunningTime="2026-03-11 02:25:40.02393772 +0000 UTC m=+44.893546464" watchObservedRunningTime="2026-03-11 02:25:42.674277979 +0000 UTC m=+47.543886754" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.674 [INFO][4622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.674 [INFO][4622] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" iface="eth0" netns="/var/run/netns/cni-afdb6e76-3941-2d6e-9e51-3773d4aeeab2" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.676 [INFO][4622] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" iface="eth0" netns="/var/run/netns/cni-afdb6e76-3941-2d6e-9e51-3773d4aeeab2" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.678 [INFO][4622] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" iface="eth0" netns="/var/run/netns/cni-afdb6e76-3941-2d6e-9e51-3773d4aeeab2" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.678 [INFO][4622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.678 [INFO][4622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.726 [INFO][4630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.726 [INFO][4630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.726 [INFO][4630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.742 [WARNING][4630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.742 [INFO][4630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.744 [INFO][4630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:42.750140 containerd[1449]: 2026-03-11 02:25:42.747 [INFO][4622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:42.751087 containerd[1449]: time="2026-03-11T02:25:42.750888866Z" level=info msg="TearDown network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" successfully" Mar 11 02:25:42.751087 containerd[1449]: time="2026-03-11T02:25:42.751050880Z" level=info msg="StopPodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" returns successfully" Mar 11 02:25:42.753579 systemd[1]: run-netns-cni\x2dafdb6e76\x2d3941\x2d2d6e\x2d9e51\x2d3773d4aeeab2.mount: Deactivated successfully. 
Mar 11 02:25:42.756284 kubelet[2504]: E0311 02:25:42.756090 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:42.756541 containerd[1449]: time="2026-03-11T02:25:42.756510213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pgwpq,Uid:1d28eda4-868c-4032-bb0a-0cda62dbcd9a,Namespace:kube-system,Attempt:1,}" Mar 11 02:25:42.948187 systemd-networkd[1375]: calia2ba5f3fda8: Link UP Mar 11 02:25:42.949853 systemd-networkd[1375]: calia2ba5f3fda8: Gained carrier Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.826 [INFO][4638] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--pgwpq-eth0 coredns-66bc5c9577- kube-system 1d28eda4-868c-4032-bb0a-0cda62dbcd9a 1043 0 2026-03-11 02:25:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-pgwpq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2ba5f3fda8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.826 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.882 [INFO][4652] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" HandleID="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.892 [INFO][4652] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" HandleID="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004821b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-pgwpq", "timestamp":"2026-03-11 02:25:42.88241893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000193340)} Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.892 [INFO][4652] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.892 [INFO][4652] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.892 [INFO][4652] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.896 [INFO][4652] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.903 [INFO][4652] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.910 [INFO][4652] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.915 [INFO][4652] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.919 [INFO][4652] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.919 [INFO][4652] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.921 [INFO][4652] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.927 [INFO][4652] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.938 [INFO][4652] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.938 [INFO][4652] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" host="localhost" Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.938 [INFO][4652] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:42.972609 containerd[1449]: 2026-03-11 02:25:42.938 [INFO][4652] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" HandleID="k8s-pod-network.da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.973729 containerd[1449]: 2026-03-11 02:25:42.943 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pgwpq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d28eda4-868c-4032-bb0a-0cda62dbcd9a", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-pgwpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ba5f3fda8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:42.973729 containerd[1449]: 2026-03-11 02:25:42.944 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.973729 containerd[1449]: 2026-03-11 02:25:42.944 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2ba5f3fda8 ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 
02:25:42.973729 containerd[1449]: 2026-03-11 02:25:42.952 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:42.973729 containerd[1449]: 2026-03-11 02:25:42.953 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pgwpq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d28eda4-868c-4032-bb0a-0cda62dbcd9a", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd", Pod:"coredns-66bc5c9577-pgwpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ba5f3fda8", 
MAC:"92:f2:dd:18:32:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:42.973729 containerd[1449]: 2026-03-11 02:25:42.966 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd" Namespace="kube-system" Pod="coredns-66bc5c9577-pgwpq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:43.018248 containerd[1449]: time="2026-03-11T02:25:43.017579820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:43.018248 containerd[1449]: time="2026-03-11T02:25:43.017678566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:43.018248 containerd[1449]: time="2026-03-11T02:25:43.017698386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:43.018248 containerd[1449]: time="2026-03-11T02:25:43.017797393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:43.071264 systemd[1]: Started cri-containerd-da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd.scope - libcontainer container da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd. Mar 11 02:25:43.095189 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:43.157091 containerd[1449]: time="2026-03-11T02:25:43.156930041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pgwpq,Uid:1d28eda4-868c-4032-bb0a-0cda62dbcd9a,Namespace:kube-system,Attempt:1,} returns sandbox id \"da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd\"" Mar 11 02:25:43.159768 kubelet[2504]: E0311 02:25:43.158673 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:43.166878 containerd[1449]: time="2026-03-11T02:25:43.166641950Z" level=info msg="CreateContainer within sandbox \"da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:25:43.194915 containerd[1449]: time="2026-03-11T02:25:43.194775488Z" level=info msg="CreateContainer within sandbox \"da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b32213aab1657949f195ec5b7126ab1cb71232e05515aa7596ba418bd68229c9\"" Mar 11 02:25:43.195947 containerd[1449]: time="2026-03-11T02:25:43.195879892Z" level=info msg="StartContainer for \"b32213aab1657949f195ec5b7126ab1cb71232e05515aa7596ba418bd68229c9\"" Mar 11 02:25:43.251440 systemd[1]: Started cri-containerd-b32213aab1657949f195ec5b7126ab1cb71232e05515aa7596ba418bd68229c9.scope - libcontainer container b32213aab1657949f195ec5b7126ab1cb71232e05515aa7596ba418bd68229c9. 
Mar 11 02:25:43.299578 containerd[1449]: time="2026-03-11T02:25:43.299538715Z" level=info msg="StartContainer for \"b32213aab1657949f195ec5b7126ab1cb71232e05515aa7596ba418bd68229c9\" returns successfully" Mar 11 02:25:43.757328 systemd[1]: run-containerd-runc-k8s.io-da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd-runc.LVfKqy.mount: Deactivated successfully. Mar 11 02:25:44.022486 kubelet[2504]: E0311 02:25:44.021546 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:44.044511 kubelet[2504]: I0311 02:25:44.043703 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pgwpq" podStartSLOduration=43.04368719 podStartE2EDuration="43.04368719s" podCreationTimestamp="2026-03-11 02:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:25:44.042830687 +0000 UTC m=+48.912439453" watchObservedRunningTime="2026-03-11 02:25:44.04368719 +0000 UTC m=+48.913295935" Mar 11 02:25:44.307390 systemd-networkd[1375]: calia2ba5f3fda8: Gained IPv6LL Mar 11 02:25:44.552581 containerd[1449]: time="2026-03-11T02:25:44.552450980Z" level=info msg="StopPodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\"" Mar 11 02:25:44.552581 containerd[1449]: time="2026-03-11T02:25:44.552576738Z" level=info msg="StopPodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\"" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.701 [INFO][4790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.702 [INFO][4790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" iface="eth0" netns="/var/run/netns/cni-11bb98c1-2fae-cf76-ab8f-a48b727842f6" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.702 [INFO][4790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" iface="eth0" netns="/var/run/netns/cni-11bb98c1-2fae-cf76-ab8f-a48b727842f6" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.703 [INFO][4790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" iface="eth0" netns="/var/run/netns/cni-11bb98c1-2fae-cf76-ab8f-a48b727842f6" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.703 [INFO][4790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.703 [INFO][4790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.770 [INFO][4816] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.770 [INFO][4816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.770 [INFO][4816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.785 [WARNING][4816] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.785 [INFO][4816] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.789 [INFO][4816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:44.797915 containerd[1449]: 2026-03-11 02:25:44.793 [INFO][4790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:44.801879 containerd[1449]: time="2026-03-11T02:25:44.801712401Z" level=info msg="TearDown network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" successfully" Mar 11 02:25:44.801879 containerd[1449]: time="2026-03-11T02:25:44.801795827Z" level=info msg="StopPodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" returns successfully" Mar 11 02:25:44.805765 systemd[1]: run-netns-cni\x2d11bb98c1\x2d2fae\x2dcf76\x2dab8f\x2da48b727842f6.mount: Deactivated successfully. Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.684 [INFO][4795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.684 [INFO][4795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" iface="eth0" netns="/var/run/netns/cni-ec8e1b81-0179-4e51-22f5-59061b7833cc" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.684 [INFO][4795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" iface="eth0" netns="/var/run/netns/cni-ec8e1b81-0179-4e51-22f5-59061b7833cc" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.685 [INFO][4795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" iface="eth0" netns="/var/run/netns/cni-ec8e1b81-0179-4e51-22f5-59061b7833cc" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.685 [INFO][4795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.685 [INFO][4795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.770 [INFO][4810] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.772 [INFO][4810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.789 [INFO][4810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.801 [WARNING][4810] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.801 [INFO][4810] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.805 [INFO][4810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:44.814534 containerd[1449]: 2026-03-11 02:25:44.810 [INFO][4795] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:44.816099 containerd[1449]: time="2026-03-11T02:25:44.815868252Z" level=info msg="TearDown network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" successfully" Mar 11 02:25:44.816099 containerd[1449]: time="2026-03-11T02:25:44.815927801Z" level=info msg="StopPodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" returns successfully" Mar 11 02:25:44.819166 systemd[1]: run-netns-cni\x2dec8e1b81\x2d0179\x2d4e51\x2d22f5\x2d59061b7833cc.mount: Deactivated successfully. 
Mar 11 02:25:44.870156 containerd[1449]: time="2026-03-11T02:25:44.870052805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-grck2,Uid:4d15dd84-e281-4325-b44c-bfd2cf49adb4,Namespace:calico-system,Attempt:1,}" Mar 11 02:25:44.873872 kubelet[2504]: E0311 02:25:44.873403 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:44.874406 containerd[1449]: time="2026-03-11T02:25:44.874126135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bcsbm,Uid:fbcb37ad-a949-4830-a43a-9cfdd14b9b96,Namespace:kube-system,Attempt:1,}" Mar 11 02:25:45.027625 kubelet[2504]: E0311 02:25:45.027539 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:45.181366 systemd-networkd[1375]: cali6e13a07d93b: Link UP Mar 11 02:25:45.183572 systemd-networkd[1375]: cali6e13a07d93b: Gained carrier Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.024 [INFO][4829] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0 calico-apiserver-c54d5dff8- calico-system 4d15dd84-e281-4325-b44c-bfd2cf49adb4 1073 0 2026-03-11 02:25:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c54d5dff8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c54d5dff8-grck2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6e13a07d93b [] [] }} ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" 
Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.024 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.091 [INFO][4857] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" HandleID="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.109 [INFO][4857] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" HandleID="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-c54d5dff8-grck2", "timestamp":"2026-03-11 02:25:45.091741057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001b82c0)} Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.109 [INFO][4857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.109 [INFO][4857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.109 [INFO][4857] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.113 [INFO][4857] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.121 [INFO][4857] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.135 [INFO][4857] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.139 [INFO][4857] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.144 [INFO][4857] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.144 [INFO][4857] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.147 [INFO][4857] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00 Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.154 [INFO][4857] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.166 [INFO][4857] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.166 [INFO][4857] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" host="localhost" Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.166 [INFO][4857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:45.205535 containerd[1449]: 2026-03-11 02:25:45.166 [INFO][4857] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" HandleID="k8s-pod-network.f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.206790 containerd[1449]: 2026-03-11 02:25:45.172 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"4d15dd84-e281-4325-b44c-bfd2cf49adb4", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c54d5dff8-grck2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e13a07d93b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:45.206790 containerd[1449]: 2026-03-11 02:25:45.172 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.206790 containerd[1449]: 2026-03-11 02:25:45.172 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e13a07d93b ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.206790 containerd[1449]: 2026-03-11 02:25:45.184 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.206790 containerd[1449]: 2026-03-11 02:25:45.185 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"4d15dd84-e281-4325-b44c-bfd2cf49adb4", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00", Pod:"calico-apiserver-c54d5dff8-grck2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e13a07d93b", MAC:"7a:73:84:57:c7:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:45.206790 containerd[1449]: 2026-03-11 02:25:45.200 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00" 
Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-grck2" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:45.251147 containerd[1449]: time="2026-03-11T02:25:45.250574327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:45.251147 containerd[1449]: time="2026-03-11T02:25:45.250675337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:45.251147 containerd[1449]: time="2026-03-11T02:25:45.250691730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:45.252916 containerd[1449]: time="2026-03-11T02:25:45.252768056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:45.308398 systemd[1]: Started cri-containerd-f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00.scope - libcontainer container f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00. 
Mar 11 02:25:45.344543 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:45.346692 systemd-networkd[1375]: cali4572aed9605: Link UP Mar 11 02:25:45.351738 systemd-networkd[1375]: cali4572aed9605: Gained carrier Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.021 [INFO][4833] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--bcsbm-eth0 coredns-66bc5c9577- kube-system fbcb37ad-a949-4830-a43a-9cfdd14b9b96 1072 0 2026-03-11 02:25:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-bcsbm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4572aed9605 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.021 [INFO][4833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.099 [INFO][4855] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" HandleID="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:45.378260 
containerd[1449]: 2026-03-11 02:25:45.110 [INFO][4855] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" HandleID="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000400330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-bcsbm", "timestamp":"2026-03-11 02:25:45.09990832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00021c6e0)} Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.110 [INFO][4855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.167 [INFO][4855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.167 [INFO][4855] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.216 [INFO][4855] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.229 [INFO][4855] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.250 [INFO][4855] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.256 [INFO][4855] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.266 [INFO][4855] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.269 [INFO][4855] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.277 [INFO][4855] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.286 [INFO][4855] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.302 [INFO][4855] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.302 [INFO][4855] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" host="localhost" Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.302 [INFO][4855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:45.378260 containerd[1449]: 2026-03-11 02:25:45.302 [INFO][4855] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" HandleID="k8s-pod-network.ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:45.379411 containerd[1449]: 2026-03-11 02:25:45.336 [INFO][4833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bcsbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbcb37ad-a949-4830-a43a-9cfdd14b9b96", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-bcsbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4572aed9605", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:45.379411 containerd[1449]: 2026-03-11 02:25:45.336 [INFO][4833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:45.379411 containerd[1449]: 2026-03-11 02:25:45.336 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4572aed9605 ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 
02:25:45.379411 containerd[1449]: 2026-03-11 02:25:45.354 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:45.379411 containerd[1449]: 2026-03-11 02:25:45.355 [INFO][4833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bcsbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbcb37ad-a949-4830-a43a-9cfdd14b9b96", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f", Pod:"coredns-66bc5c9577-bcsbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4572aed9605", 
MAC:"e2:fe:96:99:1c:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:45.379411 containerd[1449]: 2026-03-11 02:25:45.372 [INFO][4833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f" Namespace="kube-system" Pod="coredns-66bc5c9577-bcsbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:45.411172 containerd[1449]: time="2026-03-11T02:25:45.410907765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-grck2,Uid:4d15dd84-e281-4325-b44c-bfd2cf49adb4,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00\"" Mar 11 02:25:45.423495 containerd[1449]: time="2026-03-11T02:25:45.423381642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 02:25:45.441671 containerd[1449]: time="2026-03-11T02:25:45.440944090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:45.441671 containerd[1449]: time="2026-03-11T02:25:45.441199057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:45.441671 containerd[1449]: time="2026-03-11T02:25:45.441227072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:45.441671 containerd[1449]: time="2026-03-11T02:25:45.441427700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:45.481588 systemd[1]: Started cri-containerd-ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f.scope - libcontainer container ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f. Mar 11 02:25:45.506213 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:45.558919 containerd[1449]: time="2026-03-11T02:25:45.558473120Z" level=info msg="StopPodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\"" Mar 11 02:25:45.565179 containerd[1449]: time="2026-03-11T02:25:45.563464744Z" level=info msg="StopPodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\"" Mar 11 02:25:45.598130 containerd[1449]: time="2026-03-11T02:25:45.597792454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bcsbm,Uid:fbcb37ad-a949-4830-a43a-9cfdd14b9b96,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f\"" Mar 11 02:25:45.601889 kubelet[2504]: E0311 02:25:45.601689 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:45.612243 
containerd[1449]: time="2026-03-11T02:25:45.611415232Z" level=info msg="CreateContainer within sandbox \"ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:25:45.660825 containerd[1449]: time="2026-03-11T02:25:45.660775523Z" level=info msg="CreateContainer within sandbox \"ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c44d7969c3594ef44aa5f22cc8c26a2d7f93b59b7720b4aed52f7f82ff605b23\"" Mar 11 02:25:45.665148 containerd[1449]: time="2026-03-11T02:25:45.664876717Z" level=info msg="StartContainer for \"c44d7969c3594ef44aa5f22cc8c26a2d7f93b59b7720b4aed52f7f82ff605b23\"" Mar 11 02:25:45.739205 systemd[1]: Started cri-containerd-c44d7969c3594ef44aa5f22cc8c26a2d7f93b59b7720b4aed52f7f82ff605b23.scope - libcontainer container c44d7969c3594ef44aa5f22cc8c26a2d7f93b59b7720b4aed52f7f82ff605b23. Mar 11 02:25:45.802925 containerd[1449]: time="2026-03-11T02:25:45.802738765Z" level=info msg="StartContainer for \"c44d7969c3594ef44aa5f22cc8c26a2d7f93b59b7720b4aed52f7f82ff605b23\" returns successfully" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.706 [INFO][5029] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.710 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" iface="eth0" netns="/var/run/netns/cni-4218fdca-ba2e-0a16-a4ad-abf91e016286" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.711 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" iface="eth0" netns="/var/run/netns/cni-4218fdca-ba2e-0a16-a4ad-abf91e016286" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.712 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" iface="eth0" netns="/var/run/netns/cni-4218fdca-ba2e-0a16-a4ad-abf91e016286" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.713 [INFO][5029] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.714 [INFO][5029] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.781 [INFO][5056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.781 [INFO][5056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.781 [INFO][5056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.802 [WARNING][5056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.802 [INFO][5056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.811 [INFO][5056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:45.830271 containerd[1449]: 2026-03-11 02:25:45.818 [INFO][5029] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:45.836631 systemd[1]: run-netns-cni\x2d4218fdca\x2dba2e\x2d0a16\x2da4ad\x2dabf91e016286.mount: Deactivated successfully. 
Mar 11 02:25:45.839654 containerd[1449]: time="2026-03-11T02:25:45.838048683Z" level=info msg="TearDown network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" successfully" Mar 11 02:25:45.839654 containerd[1449]: time="2026-03-11T02:25:45.838084374Z" level=info msg="StopPodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" returns successfully" Mar 11 02:25:45.848133 containerd[1449]: time="2026-03-11T02:25:45.848044340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-bhbz4,Uid:98451933-bb15-4d85-b793-b6047852572d,Namespace:calico-system,Attempt:1,}" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.719 [INFO][5023] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.719 [INFO][5023] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" iface="eth0" netns="/var/run/netns/cni-ee8b64bd-1ca4-4f88-888e-af30ce5275db" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.720 [INFO][5023] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" iface="eth0" netns="/var/run/netns/cni-ee8b64bd-1ca4-4f88-888e-af30ce5275db" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.721 [INFO][5023] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" iface="eth0" netns="/var/run/netns/cni-ee8b64bd-1ca4-4f88-888e-af30ce5275db" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.721 [INFO][5023] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.721 [INFO][5023] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.805 [INFO][5061] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.805 [INFO][5061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.810 [INFO][5061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.825 [WARNING][5061] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.825 [INFO][5061] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.837 [INFO][5061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:45.853320 containerd[1449]: 2026-03-11 02:25:45.848 [INFO][5023] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:45.854418 containerd[1449]: time="2026-03-11T02:25:45.853942024Z" level=info msg="TearDown network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" successfully" Mar 11 02:25:45.854418 containerd[1449]: time="2026-03-11T02:25:45.854145908Z" level=info msg="StopPodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" returns successfully" Mar 11 02:25:45.860869 systemd[1]: run-netns-cni\x2dee8b64bd\x2d1ca4\x2d4f88\x2d888e\x2daf30ce5275db.mount: Deactivated successfully. 
Mar 11 02:25:45.865424 containerd[1449]: time="2026-03-11T02:25:45.864879120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-67dgv,Uid:3e4ab64e-7f0b-42f6-95eb-a75c21c49b91,Namespace:calico-system,Attempt:1,}" Mar 11 02:25:46.041529 kubelet[2504]: E0311 02:25:46.041153 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:46.114355 kubelet[2504]: E0311 02:25:46.114042 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:46.137360 kubelet[2504]: I0311 02:25:46.137241 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bcsbm" podStartSLOduration=45.137219113 podStartE2EDuration="45.137219113s" podCreationTimestamp="2026-03-11 02:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:25:46.076217093 +0000 UTC m=+50.945825878" watchObservedRunningTime="2026-03-11 02:25:46.137219113 +0000 UTC m=+51.006827859" Mar 11 02:25:46.384765 systemd-networkd[1375]: cali2483aa05959: Link UP Mar 11 02:25:46.388940 systemd-networkd[1375]: cali2483aa05959: Gained carrier Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:45.996 [INFO][5095] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0 calico-apiserver-c54d5dff8- calico-system 98451933-bb15-4d85-b793-b6047852572d 1090 0 2026-03-11 02:25:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c54d5dff8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c54d5dff8-bhbz4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2483aa05959 [] [] }} ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:45.997 [INFO][5095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.143 [INFO][5124] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" HandleID="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.182 [INFO][5124] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" HandleID="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-c54d5dff8-bhbz4", "timestamp":"2026-03-11 02:25:46.143902969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000668b00)} Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.182 [INFO][5124] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.183 [INFO][5124] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.183 [INFO][5124] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.208 [INFO][5124] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.248 [INFO][5124] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.280 [INFO][5124] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.299 [INFO][5124] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.310 [INFO][5124] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.311 [INFO][5124] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.318 [INFO][5124] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.335 [INFO][5124] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.355 [INFO][5124] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.357 [INFO][5124] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" host="localhost" Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.357 [INFO][5124] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:46.454201 containerd[1449]: 2026-03-11 02:25:46.357 [INFO][5124] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" HandleID="k8s-pod-network.0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.456753 containerd[1449]: 2026-03-11 02:25:46.366 [INFO][5095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"98451933-bb15-4d85-b793-b6047852572d", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c54d5dff8-bhbz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2483aa05959", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:46.456753 containerd[1449]: 2026-03-11 02:25:46.366 [INFO][5095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.456753 containerd[1449]: 2026-03-11 02:25:46.367 [INFO][5095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2483aa05959 ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.456753 containerd[1449]: 2026-03-11 02:25:46.395 [INFO][5095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" 
Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.456753 containerd[1449]: 2026-03-11 02:25:46.400 [INFO][5095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"98451933-bb15-4d85-b793-b6047852572d", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf", Pod:"calico-apiserver-c54d5dff8-bhbz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2483aa05959", MAC:"b2:de:32:c0:f1:aa", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:46.456753 containerd[1449]: 2026-03-11 02:25:46.435 [INFO][5095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf" Namespace="calico-system" Pod="calico-apiserver-c54d5dff8-bhbz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:46.544942 systemd-networkd[1375]: cali66d3f6bca77: Link UP Mar 11 02:25:46.547754 systemd-networkd[1375]: cali66d3f6bca77: Gained carrier Mar 11 02:25:46.574090 containerd[1449]: time="2026-03-11T02:25:46.570590795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:46.574090 containerd[1449]: time="2026-03-11T02:25:46.573761726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:46.574090 containerd[1449]: time="2026-03-11T02:25:46.573782476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:46.575854 containerd[1449]: time="2026-03-11T02:25:46.575750361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.019 [INFO][5107] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0 goldmane-cccfbd5cf- calico-system 3e4ab64e-7f0b-42f6-95eb-a75c21c49b91 1091 0 2026-03-11 02:25:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-67dgv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali66d3f6bca77 [] [] }} ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.020 [INFO][5107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.169 [INFO][5131] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" HandleID="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.217 [INFO][5131] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" HandleID="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" 
Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000420da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-67dgv", "timestamp":"2026-03-11 02:25:46.169071133 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001922c0)} Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.217 [INFO][5131] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.359 [INFO][5131] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.359 [INFO][5131] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.371 [INFO][5131] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.394 [INFO][5131] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.415 [INFO][5131] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.440 [INFO][5131] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.445 [INFO][5131] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.445 [INFO][5131] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.463 [INFO][5131] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.479 [INFO][5131] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.501 [INFO][5131] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.501 [INFO][5131] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" host="localhost" Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.501 [INFO][5131] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 02:25:46.599242 containerd[1449]: 2026-03-11 02:25:46.501 [INFO][5131] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" HandleID="k8s-pod-network.d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.600689 containerd[1449]: 2026-03-11 02:25:46.524 [INFO][5107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-67dgv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali66d3f6bca77", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:46.600689 containerd[1449]: 2026-03-11 02:25:46.530 [INFO][5107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.600689 containerd[1449]: 2026-03-11 02:25:46.530 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66d3f6bca77 ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.600689 containerd[1449]: 2026-03-11 02:25:46.565 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.600689 containerd[1449]: 2026-03-11 02:25:46.568 [INFO][5107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e", Pod:"goldmane-cccfbd5cf-67dgv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali66d3f6bca77", MAC:"9e:6d:6e:28:69:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:46.600689 containerd[1449]: 2026-03-11 02:25:46.592 [INFO][5107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-67dgv" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:46.617227 systemd[1]: Started cri-containerd-0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf.scope - libcontainer container 0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf. Mar 11 02:25:46.660401 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:46.713179 containerd[1449]: time="2026-03-11T02:25:46.711686711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:46.713179 containerd[1449]: time="2026-03-11T02:25:46.711790647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:46.713179 containerd[1449]: time="2026-03-11T02:25:46.711812201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:46.713179 containerd[1449]: time="2026-03-11T02:25:46.712841412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:46.763437 containerd[1449]: time="2026-03-11T02:25:46.763388295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c54d5dff8-bhbz4,Uid:98451933-bb15-4d85-b793-b6047852572d,Namespace:calico-system,Attempt:1,} returns sandbox id \"0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf\"" Mar 11 02:25:46.773328 systemd[1]: Started cri-containerd-d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e.scope - libcontainer container d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e. 
Mar 11 02:25:46.811811 systemd-networkd[1375]: cali4572aed9605: Gained IPv6LL Mar 11 02:25:46.838388 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:25:46.930937 containerd[1449]: time="2026-03-11T02:25:46.929878267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-67dgv,Uid:3e4ab64e-7f0b-42f6-95eb-a75c21c49b91,Namespace:calico-system,Attempt:1,} returns sandbox id \"d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e\"" Mar 11 02:25:47.127116 kubelet[2504]: E0311 02:25:47.126513 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:47.189151 systemd-networkd[1375]: cali6e13a07d93b: Gained IPv6LL Mar 11 02:25:47.499175 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:34336.service - OpenSSH per-connection server daemon (10.0.0.1:34336). Mar 11 02:25:47.627558 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 34336 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:47.635364 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:47.649736 systemd-logind[1434]: New session 8 of user core. Mar 11 02:25:47.655165 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 11 02:25:47.699580 systemd-networkd[1375]: cali66d3f6bca77: Gained IPv6LL Mar 11 02:25:47.993354 sshd[5276]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:48.000484 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. Mar 11 02:25:48.002911 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:34336.service: Deactivated successfully. Mar 11 02:25:48.008369 systemd[1]: session-8.scope: Deactivated successfully. Mar 11 02:25:48.010551 systemd-logind[1434]: Removed session 8. 
Mar 11 02:25:48.132885 kubelet[2504]: E0311 02:25:48.131613 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:48.275311 systemd-networkd[1375]: cali2483aa05959: Gained IPv6LL Mar 11 02:25:48.647487 containerd[1449]: time="2026-03-11T02:25:48.647131877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:48.651129 containerd[1449]: time="2026-03-11T02:25:48.650897175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 11 02:25:48.654897 containerd[1449]: time="2026-03-11T02:25:48.654814355Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:48.669476 containerd[1449]: time="2026-03-11T02:25:48.669401675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:48.670331 containerd[1449]: time="2026-03-11T02:25:48.670263068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.246790936s" Mar 11 02:25:48.670331 containerd[1449]: time="2026-03-11T02:25:48.670325442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 
02:25:48.672398 containerd[1449]: time="2026-03-11T02:25:48.672324030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 02:25:48.677449 containerd[1449]: time="2026-03-11T02:25:48.677272251Z" level=info msg="CreateContainer within sandbox \"f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 02:25:48.702335 containerd[1449]: time="2026-03-11T02:25:48.702211308Z" level=info msg="CreateContainer within sandbox \"f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af8959af9ee18f55343134200e34a266a39d440decbb3ee8b99b3c571a4c3194\"" Mar 11 02:25:48.704217 containerd[1449]: time="2026-03-11T02:25:48.704071929Z" level=info msg="StartContainer for \"af8959af9ee18f55343134200e34a266a39d440decbb3ee8b99b3c571a4c3194\"" Mar 11 02:25:48.786114 systemd[1]: run-containerd-runc-k8s.io-af8959af9ee18f55343134200e34a266a39d440decbb3ee8b99b3c571a4c3194-runc.JELKnO.mount: Deactivated successfully. Mar 11 02:25:48.805468 systemd[1]: Started cri-containerd-af8959af9ee18f55343134200e34a266a39d440decbb3ee8b99b3c571a4c3194.scope - libcontainer container af8959af9ee18f55343134200e34a266a39d440decbb3ee8b99b3c571a4c3194. 
Mar 11 02:25:48.816640 containerd[1449]: time="2026-03-11T02:25:48.816564951Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:48.818144 containerd[1449]: time="2026-03-11T02:25:48.818059652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 11 02:25:48.820623 containerd[1449]: time="2026-03-11T02:25:48.820531844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 148.182182ms" Mar 11 02:25:48.820623 containerd[1449]: time="2026-03-11T02:25:48.820599609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 02:25:48.823568 containerd[1449]: time="2026-03-11T02:25:48.823456085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 11 02:25:48.839376 containerd[1449]: time="2026-03-11T02:25:48.839260795Z" level=info msg="CreateContainer within sandbox \"0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 02:25:48.866195 containerd[1449]: time="2026-03-11T02:25:48.866151000Z" level=info msg="CreateContainer within sandbox \"0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fa9ede1a41ba80dde6d512204f4cc1945e90eb1064277e87ec573a20d0330e4f\"" Mar 11 02:25:48.867498 containerd[1449]: time="2026-03-11T02:25:48.867417523Z" level=info msg="StartContainer for 
\"fa9ede1a41ba80dde6d512204f4cc1945e90eb1064277e87ec573a20d0330e4f\"" Mar 11 02:25:48.949260 systemd[1]: Started cri-containerd-fa9ede1a41ba80dde6d512204f4cc1945e90eb1064277e87ec573a20d0330e4f.scope - libcontainer container fa9ede1a41ba80dde6d512204f4cc1945e90eb1064277e87ec573a20d0330e4f. Mar 11 02:25:49.002844 containerd[1449]: time="2026-03-11T02:25:49.001916515Z" level=info msg="StartContainer for \"af8959af9ee18f55343134200e34a266a39d440decbb3ee8b99b3c571a4c3194\" returns successfully" Mar 11 02:25:49.048900 containerd[1449]: time="2026-03-11T02:25:49.048627894Z" level=info msg="StartContainer for \"fa9ede1a41ba80dde6d512204f4cc1945e90eb1064277e87ec573a20d0330e4f\" returns successfully" Mar 11 02:25:49.213258 kubelet[2504]: I0311 02:25:49.213107 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-c54d5dff8-bhbz4" podStartSLOduration=35.157898895 podStartE2EDuration="37.213083941s" podCreationTimestamp="2026-03-11 02:25:12 +0000 UTC" firstStartedPulling="2026-03-11 02:25:46.767212623 +0000 UTC m=+51.636821368" lastFinishedPulling="2026-03-11 02:25:48.822397669 +0000 UTC m=+53.692006414" observedRunningTime="2026-03-11 02:25:49.190702182 +0000 UTC m=+54.060310927" watchObservedRunningTime="2026-03-11 02:25:49.213083941 +0000 UTC m=+54.082692686" Mar 11 02:25:50.159707 kubelet[2504]: I0311 02:25:50.159576 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:25:50.552529 kubelet[2504]: I0311 02:25:50.552341 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-c54d5dff8-grck2" podStartSLOduration=35.303102498 podStartE2EDuration="38.552318956s" podCreationTimestamp="2026-03-11 02:25:12 +0000 UTC" firstStartedPulling="2026-03-11 02:25:45.422263772 +0000 UTC m=+50.291872518" lastFinishedPulling="2026-03-11 02:25:48.671480232 +0000 UTC m=+53.541088976" observedRunningTime="2026-03-11 02:25:49.217941935 +0000 UTC 
m=+54.087550690" watchObservedRunningTime="2026-03-11 02:25:50.552318956 +0000 UTC m=+55.421927711" Mar 11 02:25:51.456305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136005935.mount: Deactivated successfully. Mar 11 02:25:52.200788 containerd[1449]: time="2026-03-11T02:25:52.200640988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:52.202229 containerd[1449]: time="2026-03-11T02:25:52.202055587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 11 02:25:52.228047 containerd[1449]: time="2026-03-11T02:25:52.227799388Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:52.230263 containerd[1449]: time="2026-03-11T02:25:52.229595957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.406097097s" Mar 11 02:25:52.230263 containerd[1449]: time="2026-03-11T02:25:52.229646568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 11 02:25:52.231261 containerd[1449]: time="2026-03-11T02:25:52.230803653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:25:52.254744 containerd[1449]: time="2026-03-11T02:25:52.254660755Z" level=info msg="CreateContainer 
within sandbox \"d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 11 02:25:52.278690 containerd[1449]: time="2026-03-11T02:25:52.278290397Z" level=info msg="CreateContainer within sandbox \"d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"94da5c0ad8a0e8af3fb223bec46c4e08aa349b6c877b2bde584b1575aae77ef9\"" Mar 11 02:25:52.281520 containerd[1449]: time="2026-03-11T02:25:52.281455829Z" level=info msg="StartContainer for \"94da5c0ad8a0e8af3fb223bec46c4e08aa349b6c877b2bde584b1575aae77ef9\"" Mar 11 02:25:52.371644 systemd[1]: Started cri-containerd-94da5c0ad8a0e8af3fb223bec46c4e08aa349b6c877b2bde584b1575aae77ef9.scope - libcontainer container 94da5c0ad8a0e8af3fb223bec46c4e08aa349b6c877b2bde584b1575aae77ef9. Mar 11 02:25:52.474825 containerd[1449]: time="2026-03-11T02:25:52.474394794Z" level=info msg="StartContainer for \"94da5c0ad8a0e8af3fb223bec46c4e08aa349b6c877b2bde584b1575aae77ef9\" returns successfully" Mar 11 02:25:53.009189 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:60224.service - OpenSSH per-connection server daemon (10.0.0.1:60224). Mar 11 02:25:53.151130 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 60224 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:53.154080 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:53.162663 systemd-logind[1434]: New session 9 of user core. Mar 11 02:25:53.171390 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 11 02:25:53.807894 sshd[5466]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:53.813654 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:60224.service: Deactivated successfully. Mar 11 02:25:53.819732 systemd[1]: session-9.scope: Deactivated successfully. Mar 11 02:25:53.823766 systemd-logind[1434]: Session 9 logged out. 
Waiting for processes to exit. Mar 11 02:25:53.826837 systemd-logind[1434]: Removed session 9. Mar 11 02:25:55.469697 containerd[1449]: time="2026-03-11T02:25:55.469257980Z" level=info msg="StopPodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\"" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.600 [WARNING][5545] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e", Pod:"goldmane-cccfbd5cf-67dgv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali66d3f6bca77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.605 [INFO][5545] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.605 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" iface="eth0" netns="" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.605 [INFO][5545] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.605 [INFO][5545] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.795 [INFO][5565] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.796 [INFO][5565] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.798 [INFO][5565] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.812 [WARNING][5565] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.812 [INFO][5565] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.816 [INFO][5565] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:55.832393 containerd[1449]: 2026-03-11 02:25:55.821 [INFO][5545] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:55.835254 containerd[1449]: time="2026-03-11T02:25:55.833076934Z" level=info msg="TearDown network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" successfully" Mar 11 02:25:55.835254 containerd[1449]: time="2026-03-11T02:25:55.833117173Z" level=info msg="StopPodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" returns successfully" Mar 11 02:25:55.918736 containerd[1449]: time="2026-03-11T02:25:55.918611439Z" level=info msg="RemovePodSandbox for \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\"" Mar 11 02:25:55.923061 containerd[1449]: time="2026-03-11T02:25:55.922884446Z" level=info msg="Forcibly stopping sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\"" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.015 [WARNING][5585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3e4ab64e-7f0b-42f6-95eb-a75c21c49b91", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5b43f549da8d182a2d9fc71f765028b523fbe6da95aa85a3de66dafee8fee3e", Pod:"goldmane-cccfbd5cf-67dgv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali66d3f6bca77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.015 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.015 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" iface="eth0" netns="" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.016 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.016 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.076 [INFO][5593] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.076 [INFO][5593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.077 [INFO][5593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.091 [WARNING][5593] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.091 [INFO][5593] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" HandleID="k8s-pod-network.e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Workload="localhost-k8s-goldmane--cccfbd5cf--67dgv-eth0" Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.095 [INFO][5593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:56.111663 containerd[1449]: 2026-03-11 02:25:56.101 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28" Mar 11 02:25:56.111663 containerd[1449]: time="2026-03-11T02:25:56.107297235Z" level=info msg="TearDown network for sandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" successfully" Mar 11 02:25:56.150098 containerd[1449]: time="2026-03-11T02:25:56.149896152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:56.150257 containerd[1449]: time="2026-03-11T02:25:56.150125203Z" level=info msg="RemovePodSandbox \"e48130abe0a1c16603f3a4dff7c75488e8b11e1926d8bfe44c9826b0a61eec28\" returns successfully" Mar 11 02:25:56.161728 containerd[1449]: time="2026-03-11T02:25:56.161475106Z" level=info msg="StopPodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\"" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.244 [WARNING][5610] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bcsbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbcb37ad-a949-4830-a43a-9cfdd14b9b96", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f", Pod:"coredns-66bc5c9577-bcsbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4572aed9605", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.245 [INFO][5610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.245 [INFO][5610] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" iface="eth0" netns="" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.245 [INFO][5610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.245 [INFO][5610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.310 [INFO][5619] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.310 [INFO][5619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.310 [INFO][5619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.321 [WARNING][5619] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.322 [INFO][5619] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.325 [INFO][5619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:56.345795 containerd[1449]: 2026-03-11 02:25:56.336 [INFO][5610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.345795 containerd[1449]: time="2026-03-11T02:25:56.345701000Z" level=info msg="TearDown network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" successfully" Mar 11 02:25:56.345795 containerd[1449]: time="2026-03-11T02:25:56.345737060Z" level=info msg="StopPodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" returns successfully" Mar 11 02:25:56.347373 containerd[1449]: time="2026-03-11T02:25:56.346946945Z" level=info msg="RemovePodSandbox for \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\"" Mar 11 02:25:56.347373 containerd[1449]: time="2026-03-11T02:25:56.347110919Z" level=info msg="Forcibly stopping sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\"" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.444 [WARNING][5636] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bcsbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbcb37ad-a949-4830-a43a-9cfdd14b9b96", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea6824b8b30ebd9bd9caffa904fe0b602b5fd3335129619680d4996e09a6c70f", Pod:"coredns-66bc5c9577-bcsbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4572aed9605", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.445 [INFO][5636] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.445 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" iface="eth0" netns="" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.445 [INFO][5636] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.445 [INFO][5636] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.524 [INFO][5644] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.524 [INFO][5644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.524 [INFO][5644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.548 [WARNING][5644] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.548 [INFO][5644] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" HandleID="k8s-pod-network.f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Workload="localhost-k8s-coredns--66bc5c9577--bcsbm-eth0" Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.552 [INFO][5644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:56.569516 containerd[1449]: 2026-03-11 02:25:56.564 [INFO][5636] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c" Mar 11 02:25:56.569516 containerd[1449]: time="2026-03-11T02:25:56.569297808Z" level=info msg="TearDown network for sandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" successfully" Mar 11 02:25:56.624466 containerd[1449]: time="2026-03-11T02:25:56.623398401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:56.624466 containerd[1449]: time="2026-03-11T02:25:56.623492857Z" level=info msg="RemovePodSandbox \"f2a2fb667c11237033f41d1cf6fd7a28cee5be5217a410023944bfb16046de9c\" returns successfully" Mar 11 02:25:56.625230 containerd[1449]: time="2026-03-11T02:25:56.624885029Z" level=info msg="StopPodSandbox for \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\"" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.786 [WARNING][5666] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0", GenerateName:"calico-kube-controllers-685f947667-", Namespace:"calico-system", SelfLink:"", UID:"22c779a8-71f3-4720-a430-6dad918fffbd", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"685f947667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922", Pod:"calico-kube-controllers-685f947667-xs4f9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaabb350c4da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.787 [INFO][5666] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.787 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" iface="eth0" netns="" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.787 [INFO][5666] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.787 [INFO][5666] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.863 [INFO][5675] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.865 [INFO][5675] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.865 [INFO][5675] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.884 [WARNING][5675] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.884 [INFO][5675] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.888 [INFO][5675] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:56.900086 containerd[1449]: 2026-03-11 02:25:56.894 [INFO][5666] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:56.900086 containerd[1449]: time="2026-03-11T02:25:56.899921476Z" level=info msg="TearDown network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\" successfully" Mar 11 02:25:56.900086 containerd[1449]: time="2026-03-11T02:25:56.900058646Z" level=info msg="StopPodSandbox for \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\" returns successfully" Mar 11 02:25:56.902211 containerd[1449]: time="2026-03-11T02:25:56.902176191Z" level=info msg="RemovePodSandbox for \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\"" Mar 11 02:25:56.902374 containerd[1449]: time="2026-03-11T02:25:56.902215478Z" level=info msg="Forcibly stopping sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\"" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.027 [WARNING][5698] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0", GenerateName:"calico-kube-controllers-685f947667-", Namespace:"calico-system", SelfLink:"", UID:"22c779a8-71f3-4720-a430-6dad918fffbd", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"685f947667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b48523d0655c914e05ed3a29f01ddabe702615d3ab1b4bc2dec8cc7e1c92a922", Pod:"calico-kube-controllers-685f947667-xs4f9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaabb350c4da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.028 [INFO][5698] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.028 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" iface="eth0" netns="" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.030 [INFO][5698] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.048 [INFO][5698] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.109 [INFO][5707] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.109 [INFO][5707] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.111 [INFO][5707] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.150 [WARNING][5707] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.151 [INFO][5707] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" HandleID="k8s-pod-network.29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Workload="localhost-k8s-calico--kube--controllers--685f947667--xs4f9-eth0" Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.161 [INFO][5707] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:57.172177 containerd[1449]: 2026-03-11 02:25:57.166 [INFO][5698] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9" Mar 11 02:25:57.172177 containerd[1449]: time="2026-03-11T02:25:57.170542664Z" level=info msg="TearDown network for sandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\" successfully" Mar 11 02:25:57.256265 containerd[1449]: time="2026-03-11T02:25:57.255934968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:57.256265 containerd[1449]: time="2026-03-11T02:25:57.256161646Z" level=info msg="RemovePodSandbox \"29523ec3317dfd78e06f9cbc47852a3cb5db486f9ac33b470e6f2c5d3acd93e9\" returns successfully" Mar 11 02:25:57.257008 containerd[1449]: time="2026-03-11T02:25:57.256902330Z" level=info msg="StopPodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\"" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.350 [WARNING][5725] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" WorkloadEndpoint="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.351 [INFO][5725] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.351 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" iface="eth0" netns="" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.351 [INFO][5725] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.351 [INFO][5725] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.407 [INFO][5733] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.410 [INFO][5733] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.410 [INFO][5733] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.423 [WARNING][5733] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.423 [INFO][5733] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.426 [INFO][5733] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:57.448803 containerd[1449]: 2026-03-11 02:25:57.431 [INFO][5725] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.448803 containerd[1449]: time="2026-03-11T02:25:57.448546070Z" level=info msg="TearDown network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" successfully" Mar 11 02:25:57.448803 containerd[1449]: time="2026-03-11T02:25:57.448583584Z" level=info msg="StopPodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" returns successfully" Mar 11 02:25:57.451529 containerd[1449]: time="2026-03-11T02:25:57.450208249Z" level=info msg="RemovePodSandbox for \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\"" Mar 11 02:25:57.451529 containerd[1449]: time="2026-03-11T02:25:57.450400428Z" level=info msg="Forcibly stopping sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\"" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.550 [WARNING][5749] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" WorkloadEndpoint="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.551 [INFO][5749] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.551 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" iface="eth0" netns="" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.551 [INFO][5749] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.551 [INFO][5749] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.620 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.620 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.621 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.640 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.640 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" HandleID="k8s-pod-network.8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Workload="localhost-k8s-whisker--5fd96b98bb--ttsjm-eth0" Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.651 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:57.667699 containerd[1449]: 2026-03-11 02:25:57.657 [INFO][5749] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978" Mar 11 02:25:57.668881 containerd[1449]: time="2026-03-11T02:25:57.667626021Z" level=info msg="TearDown network for sandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" successfully" Mar 11 02:25:57.680271 containerd[1449]: time="2026-03-11T02:25:57.680070267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:57.680271 containerd[1449]: time="2026-03-11T02:25:57.680200273Z" level=info msg="RemovePodSandbox \"8e91ac4a628251ba38a15f939cc9e2a32d56c0df71c2d640d7c71d7f73636978\" returns successfully" Mar 11 02:25:57.681629 containerd[1449]: time="2026-03-11T02:25:57.680918548Z" level=info msg="StopPodSandbox for \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\"" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.821 [WARNING][5775] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtltt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"adc87660-fa32-4458-aba8-d62f16053a90", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e", Pod:"csi-node-driver-wtltt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e9c80e5680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.823 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.823 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" iface="eth0" netns="" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.823 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.823 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.916 [INFO][5784] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.920 [INFO][5784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.921 [INFO][5784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.953 [WARNING][5784] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.953 [INFO][5784] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.965 [INFO][5784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:57.989068 containerd[1449]: 2026-03-11 02:25:57.973 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:57.989068 containerd[1449]: time="2026-03-11T02:25:57.988911941Z" level=info msg="TearDown network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\" successfully" Mar 11 02:25:57.989068 containerd[1449]: time="2026-03-11T02:25:57.989049512Z" level=info msg="StopPodSandbox for \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\" returns successfully" Mar 11 02:25:57.990632 containerd[1449]: time="2026-03-11T02:25:57.990577392Z" level=info msg="RemovePodSandbox for \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\"" Mar 11 02:25:57.990632 containerd[1449]: time="2026-03-11T02:25:57.990617964Z" level=info msg="Forcibly stopping sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\"" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.114 [WARNING][5802] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtltt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"adc87660-fa32-4458-aba8-d62f16053a90", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"177e430552e4cd15b5e4c436b4e465ab3bff974bfd43d1d4a96037b7494f635e", Pod:"csi-node-driver-wtltt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e9c80e5680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.114 [INFO][5802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.114 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" iface="eth0" netns="" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.114 [INFO][5802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.114 [INFO][5802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.185 [INFO][5811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.186 [INFO][5811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.186 [INFO][5811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.199 [WARNING][5811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.199 [INFO][5811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" HandleID="k8s-pod-network.7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Workload="localhost-k8s-csi--node--driver--wtltt-eth0" Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.202 [INFO][5811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:58.210469 containerd[1449]: 2026-03-11 02:25:58.206 [INFO][5802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28" Mar 11 02:25:58.210469 containerd[1449]: time="2026-03-11T02:25:58.210357117Z" level=info msg="TearDown network for sandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\" successfully" Mar 11 02:25:58.219590 containerd[1449]: time="2026-03-11T02:25:58.217929946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:58.219590 containerd[1449]: time="2026-03-11T02:25:58.218093058Z" level=info msg="RemovePodSandbox \"7adc1bf65826feccb5a7024e286bf9679544836dc840ec390400e5b866dcde28\" returns successfully" Mar 11 02:25:58.220147 containerd[1449]: time="2026-03-11T02:25:58.220077010Z" level=info msg="StopPodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\"" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.326 [WARNING][5828] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"98451933-bb15-4d85-b793-b6047852572d", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf", Pod:"calico-apiserver-c54d5dff8-bhbz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2483aa05959", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.326 [INFO][5828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.326 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" iface="eth0" netns="" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.326 [INFO][5828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.326 [INFO][5828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.372 [INFO][5836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.372 [INFO][5836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.372 [INFO][5836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.388 [WARNING][5836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.388 [INFO][5836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.394 [INFO][5836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:58.405058 containerd[1449]: 2026-03-11 02:25:58.401 [INFO][5828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.405745 containerd[1449]: time="2026-03-11T02:25:58.405070851Z" level=info msg="TearDown network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" successfully" Mar 11 02:25:58.405745 containerd[1449]: time="2026-03-11T02:25:58.405122543Z" level=info msg="StopPodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" returns successfully" Mar 11 02:25:58.406599 containerd[1449]: time="2026-03-11T02:25:58.406415756Z" level=info msg="RemovePodSandbox for \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\"" Mar 11 02:25:58.406599 containerd[1449]: time="2026-03-11T02:25:58.406498748Z" level=info msg="Forcibly stopping sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\"" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.489 [WARNING][5855] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"98451933-bb15-4d85-b793-b6047852572d", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0dc6fbf49a81d6c659559ec0a93de551fb2b7162971632e7ff0a0c2d98fcedaf", Pod:"calico-apiserver-c54d5dff8-bhbz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2483aa05959", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.490 [INFO][5855] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.490 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" iface="eth0" netns="" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.490 [INFO][5855] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.490 [INFO][5855] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.538 [INFO][5863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.538 [INFO][5863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.538 [INFO][5863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.554 [WARNING][5863] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.555 [INFO][5863] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" HandleID="k8s-pod-network.e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Workload="localhost-k8s-calico--apiserver--c54d5dff8--bhbz4-eth0" Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.559 [INFO][5863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:58.570771 containerd[1449]: 2026-03-11 02:25:58.564 [INFO][5855] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d" Mar 11 02:25:58.570771 containerd[1449]: time="2026-03-11T02:25:58.570670622Z" level=info msg="TearDown network for sandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" successfully" Mar 11 02:25:58.621370 containerd[1449]: time="2026-03-11T02:25:58.621165995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:58.621540 containerd[1449]: time="2026-03-11T02:25:58.621328535Z" level=info msg="RemovePodSandbox \"e32a34ee909dc77ad3ea018ae971fec2ac67460a7b7e610d33b6aa90883d3a0d\" returns successfully" Mar 11 02:25:58.623401 containerd[1449]: time="2026-03-11T02:25:58.623174233Z" level=info msg="StopPodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\"" Mar 11 02:25:58.842443 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:33870.service - OpenSSH per-connection server daemon (10.0.0.1:33870). Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.767 [WARNING][5882] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"4d15dd84-e281-4325-b44c-bfd2cf49adb4", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00", Pod:"calico-apiserver-c54d5dff8-grck2", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e13a07d93b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.767 [INFO][5882] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.768 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" iface="eth0" netns="" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.769 [INFO][5882] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.769 [INFO][5882] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.840 [INFO][5891] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.840 [INFO][5891] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.840 [INFO][5891] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.855 [WARNING][5891] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.855 [INFO][5891] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.859 [INFO][5891] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:58.870141 containerd[1449]: 2026-03-11 02:25:58.866 [INFO][5882] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:58.873549 containerd[1449]: time="2026-03-11T02:25:58.870175294Z" level=info msg="TearDown network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" successfully" Mar 11 02:25:58.873549 containerd[1449]: time="2026-03-11T02:25:58.870210583Z" level=info msg="StopPodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" returns successfully" Mar 11 02:25:58.873549 containerd[1449]: time="2026-03-11T02:25:58.871360874Z" level=info msg="RemovePodSandbox for \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\"" Mar 11 02:25:58.873549 containerd[1449]: time="2026-03-11T02:25:58.871416634Z" level=info msg="Forcibly stopping sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\"" Mar 11 02:25:58.960117 sshd[5898]: Accepted publickey for core from 10.0.0.1 port 33870 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:58.960665 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:58.967607 systemd-logind[1434]: New session 10 of user core. Mar 11 02:25:58.975371 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:58.953 [WARNING][5910] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0", GenerateName:"calico-apiserver-c54d5dff8-", Namespace:"calico-system", SelfLink:"", UID:"4d15dd84-e281-4325-b44c-bfd2cf49adb4", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c54d5dff8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a8d01f6bbdbce1b26ca5f25c72ba06eb02c678a98c1b2c8effa12db4bdcd00", Pod:"calico-apiserver-c54d5dff8-grck2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6e13a07d93b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:58.954 [INFO][5910] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:58.954 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" iface="eth0" netns="" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:58.954 [INFO][5910] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:58.954 [INFO][5910] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.007 [INFO][5920] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.007 [INFO][5920] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.007 [INFO][5920] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.020 [WARNING][5920] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.020 [INFO][5920] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" HandleID="k8s-pod-network.6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Workload="localhost-k8s-calico--apiserver--c54d5dff8--grck2-eth0" Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.024 [INFO][5920] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:59.036108 containerd[1449]: 2026-03-11 02:25:59.029 [INFO][5910] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e" Mar 11 02:25:59.036108 containerd[1449]: time="2026-03-11T02:25:59.033666340Z" level=info msg="TearDown network for sandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" successfully" Mar 11 02:25:59.042688 containerd[1449]: time="2026-03-11T02:25:59.042551480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:25:59.042688 containerd[1449]: time="2026-03-11T02:25:59.042631976Z" level=info msg="RemovePodSandbox \"6cebdf993b726d5804582428f6ad845404569167fab7d86bd32d7e419738a05e\" returns successfully" Mar 11 02:25:59.044251 containerd[1449]: time="2026-03-11T02:25:59.044100744Z" level=info msg="StopPodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\"" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.140 [WARNING][5943] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pgwpq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d28eda4-868c-4032-bb0a-0cda62dbcd9a", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd", Pod:"coredns-66bc5c9577-pgwpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ba5f3fda8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.141 [INFO][5943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.141 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" iface="eth0" netns="" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.141 [INFO][5943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.141 [INFO][5943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.193 [INFO][5956] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.193 [INFO][5956] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.193 [INFO][5956] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.211 [WARNING][5956] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.211 [INFO][5956] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0" Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.216 [INFO][5956] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:59.229369 containerd[1449]: 2026-03-11 02:25:59.223 [INFO][5943] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Mar 11 02:25:59.229369 containerd[1449]: time="2026-03-11T02:25:59.229232128Z" level=info msg="TearDown network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" successfully" Mar 11 02:25:59.229369 containerd[1449]: time="2026-03-11T02:25:59.229264031Z" level=info msg="StopPodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" returns successfully" Mar 11 02:25:59.230726 containerd[1449]: time="2026-03-11T02:25:59.230694434Z" level=info msg="RemovePodSandbox for \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\"" Mar 11 02:25:59.231316 containerd[1449]: time="2026-03-11T02:25:59.231251536Z" level=info msg="Forcibly stopping sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\"" Mar 11 02:25:59.304202 sshd[5898]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:59.318157 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:33870.service: Deactivated successfully. 
Mar 11 02:25:59.322081 systemd[1]: session-10.scope: Deactivated successfully. Mar 11 02:25:59.326175 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. Mar 11 02:25:59.329099 systemd-logind[1434]: Removed session 10. Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.330 [WARNING][5975] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pgwpq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d28eda4-868c-4032-bb0a-0cda62dbcd9a", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da9add22be805eb10e943fd90c4887d7af368540ad2729eed0176094ba7af7dd", Pod:"coredns-66bc5c9577-pgwpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ba5f3fda8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.330 [INFO][5975] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a"
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.330 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" iface="eth0" netns=""
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.330 [INFO][5975] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a"
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.330 [INFO][5975] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a"
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.369 [INFO][5985] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0"
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.370 [INFO][5985] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.370 [INFO][5985] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.382 [WARNING][5985] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0"
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.382 [INFO][5985] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" HandleID="k8s-pod-network.d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a" Workload="localhost-k8s-coredns--66bc5c9577--pgwpq-eth0"
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.385 [INFO][5985] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:25:59.392782 containerd[1449]: 2026-03-11 02:25:59.389 [INFO][5975] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a"
Mar 11 02:25:59.392782 containerd[1449]: time="2026-03-11T02:25:59.392736483Z" level=info msg="TearDown network for sandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" successfully"
Mar 11 02:25:59.400578 containerd[1449]: time="2026-03-11T02:25:59.400522237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 11 02:25:59.400720 containerd[1449]: time="2026-03-11T02:25:59.400614636Z" level=info msg="RemovePodSandbox \"d200bcb9dc20bf7d9d5a07fc679c243b35fafc09d5c714cd745d7d024f71d43a\" returns successfully"
Mar 11 02:26:03.555540 kubelet[2504]: E0311 02:26:03.555416 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:04.147890 kubelet[2504]: I0311 02:26:04.147758 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-67dgv" podStartSLOduration=46.852125592 podStartE2EDuration="52.147737127s" podCreationTimestamp="2026-03-11 02:25:12 +0000 UTC" firstStartedPulling="2026-03-11 02:25:46.936881786 +0000 UTC m=+51.806490531" lastFinishedPulling="2026-03-11 02:25:52.23249332 +0000 UTC m=+57.102102066" observedRunningTime="2026-03-11 02:25:53.22030293 +0000 UTC m=+58.089911695" watchObservedRunningTime="2026-03-11 02:26:04.147737127 +0000 UTC m=+69.017345873"
Mar 11 02:26:04.320789 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:33880.service - OpenSSH per-connection server daemon (10.0.0.1:33880).
Mar 11 02:26:04.396065 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 33880 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:04.398366 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:04.406482 systemd-logind[1434]: New session 11 of user core.
Mar 11 02:26:04.413201 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 11 02:26:04.602061 sshd[6019]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:04.607813 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:33880.service: Deactivated successfully.
Mar 11 02:26:04.611861 systemd[1]: session-11.scope: Deactivated successfully.
Mar 11 02:26:04.617279 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit.
Mar 11 02:26:04.619642 systemd-logind[1434]: Removed session 11.
Mar 11 02:26:09.616867 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:48938.service - OpenSSH per-connection server daemon (10.0.0.1:48938).
Mar 11 02:26:09.665790 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 48938 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:09.668456 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:09.675625 systemd-logind[1434]: New session 12 of user core.
Mar 11 02:26:09.690581 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 11 02:26:09.861716 sshd[6060]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:09.871589 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:48938.service: Deactivated successfully.
Mar 11 02:26:09.875558 systemd[1]: session-12.scope: Deactivated successfully.
Mar 11 02:26:09.877369 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit.
Mar 11 02:26:09.879725 systemd-logind[1434]: Removed session 12.
Mar 11 02:26:12.551711 kubelet[2504]: E0311 02:26:12.551516 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:14.889689 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:48950.service - OpenSSH per-connection server daemon (10.0.0.1:48950).
Mar 11 02:26:15.011542 sshd[6107]: Accepted publickey for core from 10.0.0.1 port 48950 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:15.014408 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:15.024418 systemd-logind[1434]: New session 13 of user core.
Mar 11 02:26:15.031358 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 11 02:26:15.338790 sshd[6107]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:15.348265 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:48950.service: Deactivated successfully.
Mar 11 02:26:15.350283 systemd[1]: session-13.scope: Deactivated successfully.
Mar 11 02:26:15.352417 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit.
Mar 11 02:26:15.358529 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:48966.service - OpenSSH per-connection server daemon (10.0.0.1:48966).
Mar 11 02:26:15.360937 systemd-logind[1434]: Removed session 13.
Mar 11 02:26:15.412869 sshd[6132]: Accepted publickey for core from 10.0.0.1 port 48966 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:15.415169 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:15.421742 systemd-logind[1434]: New session 14 of user core.
Mar 11 02:26:15.436296 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 11 02:26:15.777390 sshd[6132]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:15.792840 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:48966.service: Deactivated successfully.
Mar 11 02:26:15.798570 systemd[1]: session-14.scope: Deactivated successfully.
Mar 11 02:26:15.804497 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit.
Mar 11 02:26:15.812706 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:48968.service - OpenSSH per-connection server daemon (10.0.0.1:48968).
Mar 11 02:26:15.816635 systemd-logind[1434]: Removed session 14.
Mar 11 02:26:15.878388 sshd[6154]: Accepted publickey for core from 10.0.0.1 port 48968 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:15.882492 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:15.894255 systemd-logind[1434]: New session 15 of user core.
Mar 11 02:26:15.902227 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 11 02:26:16.098132 sshd[6154]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:16.103707 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:48968.service: Deactivated successfully.
Mar 11 02:26:16.107093 systemd[1]: session-15.scope: Deactivated successfully.
Mar 11 02:26:16.108471 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit.
Mar 11 02:26:16.110891 systemd-logind[1434]: Removed session 15.
Mar 11 02:26:21.112896 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:47944.service - OpenSSH per-connection server daemon (10.0.0.1:47944).
Mar 11 02:26:21.156140 sshd[6168]: Accepted publickey for core from 10.0.0.1 port 47944 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:21.159077 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:21.165021 systemd-logind[1434]: New session 16 of user core.
Mar 11 02:26:21.174254 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 11 02:26:21.350666 sshd[6168]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:21.358264 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:47944.service: Deactivated successfully.
Mar 11 02:26:21.360739 systemd[1]: session-16.scope: Deactivated successfully.
Mar 11 02:26:21.363033 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit.
Mar 11 02:26:21.370462 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948).
Mar 11 02:26:21.372681 systemd-logind[1434]: Removed session 16.
Mar 11 02:26:21.427276 sshd[6182]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:21.429496 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:21.437990 systemd-logind[1434]: New session 17 of user core.
Mar 11 02:26:21.448259 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 11 02:26:21.748725 sshd[6182]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:21.760117 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:47948.service: Deactivated successfully.
Mar 11 02:26:21.762026 systemd[1]: session-17.scope: Deactivated successfully.
Mar 11 02:26:21.763546 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit.
Mar 11 02:26:21.764945 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:47962.service - OpenSSH per-connection server daemon (10.0.0.1:47962).
Mar 11 02:26:21.766238 systemd-logind[1434]: Removed session 17.
Mar 11 02:26:21.826443 sshd[6195]: Accepted publickey for core from 10.0.0.1 port 47962 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:21.828227 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:21.835217 systemd-logind[1434]: New session 18 of user core.
Mar 11 02:26:21.850229 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 11 02:26:22.432289 sshd[6195]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:22.446293 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:47962.service: Deactivated successfully.
Mar 11 02:26:22.451755 systemd[1]: session-18.scope: Deactivated successfully.
Mar 11 02:26:22.455211 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit.
Mar 11 02:26:22.463021 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:47976.service - OpenSSH per-connection server daemon (10.0.0.1:47976).
Mar 11 02:26:22.468206 systemd-logind[1434]: Removed session 18.
Mar 11 02:26:22.532685 sshd[6221]: Accepted publickey for core from 10.0.0.1 port 47976 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:22.533361 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:22.538823 systemd-logind[1434]: New session 19 of user core.
Mar 11 02:26:22.546257 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 11 02:26:22.997222 sshd[6221]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:23.008848 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:47976.service: Deactivated successfully.
Mar 11 02:26:23.012883 systemd[1]: session-19.scope: Deactivated successfully.
Mar 11 02:26:23.017358 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit.
Mar 11 02:26:23.023487 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:47992.service - OpenSSH per-connection server daemon (10.0.0.1:47992).
Mar 11 02:26:23.026147 systemd-logind[1434]: Removed session 19.
Mar 11 02:26:23.090276 sshd[6233]: Accepted publickey for core from 10.0.0.1 port 47992 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:23.092719 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:23.099036 systemd-logind[1434]: New session 20 of user core.
Mar 11 02:26:23.107208 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 11 02:26:23.283525 sshd[6233]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:23.294043 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:47992.service: Deactivated successfully.
Mar 11 02:26:23.304322 systemd[1]: session-20.scope: Deactivated successfully.
Mar 11 02:26:23.305468 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit.
Mar 11 02:26:23.308864 systemd-logind[1434]: Removed session 20.
Mar 11 02:26:23.477403 kubelet[2504]: I0311 02:26:23.476868 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:26:25.551515 kubelet[2504]: E0311 02:26:25.551436 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:28.296885 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:47998.service - OpenSSH per-connection server daemon (10.0.0.1:47998).
Mar 11 02:26:28.373269 sshd[6300]: Accepted publickey for core from 10.0.0.1 port 47998 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:28.376314 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:28.383130 systemd-logind[1434]: New session 21 of user core.
Mar 11 02:26:28.395390 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 11 02:26:28.553204 kubelet[2504]: E0311 02:26:28.550897 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:28.643357 sshd[6300]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:28.651446 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:47998.service: Deactivated successfully.
Mar 11 02:26:28.654108 systemd[1]: session-21.scope: Deactivated successfully.
Mar 11 02:26:28.655726 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit.
Mar 11 02:26:28.657318 systemd-logind[1434]: Removed session 21.
Mar 11 02:26:33.677144 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:57142.service - OpenSSH per-connection server daemon (10.0.0.1:57142).
Mar 11 02:26:33.726896 sshd[6319]: Accepted publickey for core from 10.0.0.1 port 57142 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:33.730345 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:33.741364 systemd-logind[1434]: New session 22 of user core.
Mar 11 02:26:33.751637 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 11 02:26:34.919473 sshd[6319]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:34.927171 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:57142.service: Deactivated successfully.
Mar 11 02:26:34.935902 systemd[1]: session-22.scope: Deactivated successfully.
Mar 11 02:26:34.980331 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit.
Mar 11 02:26:34.984808 systemd-logind[1434]: Removed session 22.
Mar 11 02:26:36.552347 kubelet[2504]: E0311 02:26:36.552244 2504 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:39.938347 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:40900.service - OpenSSH per-connection server daemon (10.0.0.1:40900).
Mar 11 02:26:39.991813 sshd[6378]: Accepted publickey for core from 10.0.0.1 port 40900 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:26:39.994208 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:26:40.000712 systemd-logind[1434]: New session 23 of user core.
Mar 11 02:26:40.008230 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 11 02:26:40.154780 sshd[6378]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:40.160898 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:40900.service: Deactivated successfully.
Mar 11 02:26:40.163389 systemd[1]: session-23.scope: Deactivated successfully.
Mar 11 02:26:40.164626 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit.
Mar 11 02:26:40.166502 systemd-logind[1434]: Removed session 23.