Mar 7 02:12:48.091796 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 02:12:48.091816 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 02:12:48.091827 kernel: BIOS-provided physical RAM map: Mar 7 02:12:48.091833 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 7 02:12:48.091838 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 7 02:12:48.091843 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 7 02:12:48.091849 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 7 02:12:48.091855 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 7 02:12:48.091860 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 02:12:48.091868 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 7 02:12:48.091874 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 7 02:12:48.091879 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 7 02:12:48.091885 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 7 02:12:48.091890 kernel: NX (Execute Disable) protection: active Mar 7 02:12:48.091897 kernel: APIC: Static calls initialized Mar 7 02:12:48.091905 kernel: SMBIOS 2.8 present. 
Mar 7 02:12:48.091911 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 7 02:12:48.091916 kernel: Hypervisor detected: KVM Mar 7 02:12:48.091922 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 02:12:48.091928 kernel: kvm-clock: using sched offset of 3923129527 cycles Mar 7 02:12:48.091934 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 02:12:48.091940 kernel: tsc: Detected 2445.426 MHz processor Mar 7 02:12:48.091946 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 02:12:48.091952 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 02:12:48.092019 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 7 02:12:48.092031 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 7 02:12:48.092037 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 02:12:48.092042 kernel: Using GB pages for direct mapping Mar 7 02:12:48.092048 kernel: ACPI: Early table checksum verification disabled Mar 7 02:12:48.092054 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 7 02:12:48.092060 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092066 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092072 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092080 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 7 02:12:48.092086 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092092 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092098 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092104 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:12:48.092110 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 7 02:12:48.092115 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 7 02:12:48.092125 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 7 02:12:48.092134 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 7 02:12:48.092140 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 7 02:12:48.092146 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 7 02:12:48.092152 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 7 02:12:48.092158 kernel: No NUMA configuration found Mar 7 02:12:48.092164 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 7 02:12:48.092170 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 7 02:12:48.092179 kernel: Zone ranges: Mar 7 02:12:48.092185 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 02:12:48.092191 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 7 02:12:48.092197 kernel: Normal empty Mar 7 02:12:48.092204 kernel: Movable zone start for each node Mar 7 02:12:48.092209 kernel: Early memory node ranges Mar 7 02:12:48.092216 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 02:12:48.092222 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 7 02:12:48.092228 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 7 02:12:48.092236 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Mar 7 02:12:48.092242 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 02:12:48.092248 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 7 02:12:48.092254 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 02:12:48.092260 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 02:12:48.092267 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 02:12:48.092273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 02:12:48.092279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 02:12:48.092285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 02:12:48.092293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 02:12:48.092299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 02:12:48.092305 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 02:12:48.092312 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 02:12:48.092318 kernel: TSC deadline timer available Mar 7 02:12:48.092324 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 7 02:12:48.092330 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 02:12:48.092336 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 02:12:48.092342 kernel: kvm-guest: setup PV sched yield Mar 7 02:12:48.092348 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 02:12:48.092356 kernel: Booting paravirtualized kernel on KVM Mar 7 02:12:48.092362 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 02:12:48.092369 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 7 02:12:48.092375 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 7 02:12:48.092381 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 7 02:12:48.092387 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 7 02:12:48.092393 kernel: kvm-guest: PV spinlocks enabled Mar 7 02:12:48.092399 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 02:12:48.092406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 02:12:48.092414 kernel: random: crng init done Mar 7 02:12:48.092421 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 02:12:48.092427 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 02:12:48.092433 kernel: Fallback order for Node 0: 0 Mar 7 02:12:48.092439 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 7 02:12:48.092445 kernel: Policy zone: DMA32 Mar 7 02:12:48.092451 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 02:12:48.092457 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 7 02:12:48.092466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 7 02:12:48.092472 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 02:12:48.092478 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 02:12:48.092484 kernel: Dynamic Preempt: voluntary Mar 7 02:12:48.092491 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 02:12:48.092502 kernel: rcu: RCU event tracing is enabled. Mar 7 02:12:48.092508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 7 02:12:48.092515 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 02:12:48.092521 kernel: Rude variant of Tasks RCU enabled. Mar 7 02:12:48.092530 kernel: Tracing variant of Tasks RCU enabled. Mar 7 02:12:48.092536 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 02:12:48.092542 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 7 02:12:48.092548 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 7 02:12:48.092554 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 02:12:48.092560 kernel: Console: colour VGA+ 80x25 Mar 7 02:12:48.092566 kernel: printk: console [ttyS0] enabled Mar 7 02:12:48.092572 kernel: ACPI: Core revision 20230628 Mar 7 02:12:48.092579 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 02:12:48.092585 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 02:12:48.092593 kernel: x2apic enabled Mar 7 02:12:48.092599 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 02:12:48.092606 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 02:12:48.092612 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 02:12:48.092618 kernel: kvm-guest: setup PV IPIs Mar 7 02:12:48.092624 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 02:12:48.092640 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 02:12:48.092646 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 7 02:12:48.092653 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 02:12:48.092659 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 02:12:48.092665 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 02:12:48.092674 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 02:12:48.092680 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 02:12:48.092687 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 02:12:48.092693 kernel: Speculative Store Bypass: Vulnerable Mar 7 02:12:48.092700 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 02:12:48.092709 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 02:12:48.092715 kernel: active return thunk: srso_alias_return_thunk Mar 7 02:12:48.092722 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 02:12:48.092728 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 02:12:48.092735 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 02:12:48.092741 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 02:12:48.092748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 02:12:48.092754 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 02:12:48.092763 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 02:12:48.092770 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 7 02:12:48.092776 kernel: Freeing SMP alternatives memory: 32K Mar 7 02:12:48.092782 kernel: pid_max: default: 32768 minimum: 301 Mar 7 02:12:48.092789 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 02:12:48.092795 kernel: landlock: Up and running. Mar 7 02:12:48.092802 kernel: SELinux: Initializing. Mar 7 02:12:48.092808 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 02:12:48.092814 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 02:12:48.092823 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 02:12:48.092830 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 02:12:48.092836 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 02:12:48.092843 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 02:12:48.092849 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 7 02:12:48.092855 kernel: signal: max sigframe size: 1776 Mar 7 02:12:48.092862 kernel: rcu: Hierarchical SRCU implementation. Mar 7 02:12:48.092869 kernel: rcu: Max phase no-delay instances is 400. Mar 7 02:12:48.092875 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 02:12:48.092884 kernel: smp: Bringing up secondary CPUs ... Mar 7 02:12:48.092890 kernel: smpboot: x86: Booting SMP configuration: Mar 7 02:12:48.092896 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 7 02:12:48.092903 kernel: smp: Brought up 1 node, 4 CPUs Mar 7 02:12:48.092909 kernel: smpboot: Max logical packages: 1 Mar 7 02:12:48.092915 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 7 02:12:48.092922 kernel: devtmpfs: initialized Mar 7 02:12:48.092928 kernel: x86/mm: Memory block size: 128MB Mar 7 02:12:48.092935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 02:12:48.092943 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 7 02:12:48.092950 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 02:12:48.093050 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 02:12:48.093057 kernel: audit: initializing netlink subsys (disabled) Mar 7 02:12:48.093064 kernel: audit: type=2000 audit(1772849566.536:1): state=initialized audit_enabled=0 res=1 Mar 7 02:12:48.093070 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 02:12:48.093077 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 02:12:48.093083 kernel: cpuidle: using governor menu Mar 7 02:12:48.093089 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 02:12:48.093100 kernel: dca service started, version 1.12.1 Mar 7 02:12:48.093107 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 02:12:48.093113 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 02:12:48.093120 kernel: PCI: Using configuration type 1 for base access Mar 7 02:12:48.093126 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 02:12:48.093133 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 02:12:48.093139 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 02:12:48.093146 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 02:12:48.093152 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 02:12:48.093161 kernel: ACPI: Added _OSI(Module Device) Mar 7 02:12:48.093167 kernel: ACPI: Added _OSI(Processor Device) Mar 7 02:12:48.093174 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 02:12:48.093180 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 02:12:48.093186 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 02:12:48.093193 kernel: ACPI: Interpreter enabled Mar 7 02:12:48.093199 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 02:12:48.093205 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 02:12:48.093212 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 02:12:48.093220 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 02:12:48.093227 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 02:12:48.093233 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 02:12:48.093410 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 02:12:48.093542 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 7 02:12:48.093667 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 7 02:12:48.093676 kernel: PCI host bridge to bus 0000:00 Mar 7 02:12:48.093808 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 02:12:48.093920 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 
02:12:48.094192 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 02:12:48.094308 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 7 02:12:48.094416 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 7 02:12:48.094524 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 7 02:12:48.094632 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 02:12:48.094773 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 7 02:12:48.094906 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 7 02:12:48.095090 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 7 02:12:48.095214 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 7 02:12:48.095333 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 7 02:12:48.095452 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 02:12:48.095581 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 7 02:12:48.095709 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 7 02:12:48.095829 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 7 02:12:48.095948 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 7 02:12:48.096142 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 7 02:12:48.096265 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 7 02:12:48.096384 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 7 02:12:48.096509 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 7 02:12:48.096637 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 7 02:12:48.096757 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 7 02:12:48.096877 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 7 02:12:48.097094 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 7 02:12:48.097268 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 7 02:12:48.097397 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 7 02:12:48.097521 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 7 02:12:48.097647 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 7 02:12:48.097766 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 7 02:12:48.097885 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 7 02:12:48.098089 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 7 02:12:48.098214 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 7 02:12:48.098225 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 02:12:48.098239 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 02:12:48.098245 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 02:12:48.098252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 02:12:48.098259 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 7 02:12:48.098265 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 7 02:12:48.098272 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 7 02:12:48.098279 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 7 02:12:48.098285 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 7 02:12:48.098291 kernel: ACPI: PCI: Interrupt link GSIB configured for 
IRQ 17 Mar 7 02:12:48.098300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 7 02:12:48.098307 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 7 02:12:48.098313 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 7 02:12:48.098320 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 7 02:12:48.098327 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 7 02:12:48.098333 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 7 02:12:48.098340 kernel: iommu: Default domain type: Translated Mar 7 02:12:48.098346 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 02:12:48.098352 kernel: PCI: Using ACPI for IRQ routing Mar 7 02:12:48.098362 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 02:12:48.098368 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 7 02:12:48.098375 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 7 02:12:48.098496 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 7 02:12:48.098615 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 7 02:12:48.098733 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 02:12:48.098741 kernel: vgaarb: loaded Mar 7 02:12:48.098748 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 7 02:12:48.098758 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 7 02:12:48.098765 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 02:12:48.098771 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 02:12:48.098778 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 02:12:48.098784 kernel: pnp: PnP ACPI init Mar 7 02:12:48.098916 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 7 02:12:48.098926 kernel: pnp: PnP ACPI: found 6 devices Mar 7 02:12:48.098933 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 02:12:48.098943 kernel: NET: Registered PF_INET protocol family Mar 7 02:12:48.098949 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 02:12:48.099020 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 02:12:48.099028 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 02:12:48.099035 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 02:12:48.099041 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 02:12:48.099048 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 02:12:48.099054 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 02:12:48.099061 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 02:12:48.099071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 02:12:48.099078 kernel: NET: Registered PF_XDP protocol family Mar 7 02:12:48.099202 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 02:12:48.099314 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 02:12:48.099423 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 02:12:48.099531 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 7 02:12:48.099639 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 7 02:12:48.099749 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 7 
02:12:48.099762 kernel: PCI: CLS 0 bytes, default 64 Mar 7 02:12:48.099768 kernel: Initialise system trusted keyrings Mar 7 02:12:48.099775 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 02:12:48.099782 kernel: Key type asymmetric registered Mar 7 02:12:48.099788 kernel: Asymmetric key parser 'x509' registered Mar 7 02:12:48.099795 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 02:12:48.099802 kernel: io scheduler mq-deadline registered Mar 7 02:12:48.099808 kernel: io scheduler kyber registered Mar 7 02:12:48.099815 kernel: io scheduler bfq registered Mar 7 02:12:48.099821 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 02:12:48.099831 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 7 02:12:48.099838 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 02:12:48.099845 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 7 02:12:48.099851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 02:12:48.099858 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 02:12:48.099865 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 02:12:48.099871 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 02:12:48.099878 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 02:12:48.099884 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 02:12:48.100087 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 7 02:12:48.100208 kernel: rtc_cmos 00:04: registered as rtc0 Mar 7 02:12:48.100321 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T02:12:47 UTC (1772849567) Mar 7 02:12:48.100434 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 7 02:12:48.100442 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 02:12:48.100449 kernel: NET: Registered PF_INET6 protocol family Mar 7 02:12:48.100456 kernel: Segment Routing with IPv6 Mar 7 02:12:48.100467 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 02:12:48.100473 kernel: NET: Registered PF_PACKET protocol family Mar 7 02:12:48.100480 kernel: Key type dns_resolver registered Mar 7 02:12:48.100487 kernel: IPI shorthand broadcast: enabled Mar 7 02:12:48.100493 kernel: sched_clock: Marking stable (1058056635, 347836519)->(1764288684, -358395530) Mar 7 02:12:48.100500 kernel: registered taskstats version 1 Mar 7 02:12:48.100506 kernel: Loading compiled-in X.509 certificates Mar 7 02:12:48.100513 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 02:12:48.100519 kernel: Key type .fscrypt registered Mar 7 02:12:48.100526 kernel: Key type fscrypt-provisioning registered Mar 7 02:12:48.100535 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 02:12:48.100541 kernel: ima: Allocated hash algorithm: sha1 Mar 7 02:12:48.100548 kernel: ima: No architecture policies found Mar 7 02:12:48.100554 kernel: clk: Disabling unused clocks Mar 7 02:12:48.100561 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 02:12:48.100567 kernel: Write protecting the kernel read-only data: 36864k Mar 7 02:12:48.100574 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 02:12:48.100581 kernel: Run /init as init process Mar 7 02:12:48.100589 kernel: with arguments: Mar 7 02:12:48.100596 kernel: /init Mar 7 02:12:48.100602 kernel: with environment: Mar 7 02:12:48.100609 kernel: HOME=/ Mar 7 02:12:48.100615 kernel: TERM=linux Mar 7 02:12:48.100624 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 02:12:48.100632 systemd[1]: Detected virtualization kvm. Mar 7 02:12:48.100640 systemd[1]: Detected architecture x86-64. Mar 7 02:12:48.100649 systemd[1]: Running in initrd. Mar 7 02:12:48.100655 systemd[1]: No hostname configured, using default hostname. Mar 7 02:12:48.100662 systemd[1]: Hostname set to . Mar 7 02:12:48.100670 systemd[1]: Initializing machine ID from VM UUID. Mar 7 02:12:48.100677 systemd[1]: Queued start job for default target initrd.target. Mar 7 02:12:48.100684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 02:12:48.100691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 02:12:48.100698 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 02:12:48.100708 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 02:12:48.100715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 02:12:48.100722 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 02:12:48.100730 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 02:12:48.100737 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 02:12:48.100744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 02:12:48.100751 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:12:48.100761 systemd[1]: Reached target paths.target - Path Units. Mar 7 02:12:48.100768 systemd[1]: Reached target slices.target - Slice Units. Mar 7 02:12:48.100775 systemd[1]: Reached target swap.target - Swaps. Mar 7 02:12:48.100794 systemd[1]: Reached target timers.target - Timer Units. Mar 7 02:12:48.100803 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 02:12:48.100811 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 02:12:48.100820 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 02:12:48.100827 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 02:12:48.100834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 7 02:12:48.100841 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 02:12:48.100849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 02:12:48.100856 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 02:12:48.100863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 02:12:48.100870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 02:12:48.100877 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 02:12:48.100887 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 02:12:48.100894 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 02:12:48.100901 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 02:12:48.100908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:12:48.100915 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 02:12:48.100922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 02:12:48.100930 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 02:12:48.101015 systemd-journald[194]: Collecting audit messages is disabled. Mar 7 02:12:48.101038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 02:12:48.101046 systemd-journald[194]: Journal started Mar 7 02:12:48.101061 systemd-journald[194]: Runtime Journal (/run/log/journal/367efae97362449c939b05d5a07d6128) is 6.0M, max 48.4M, 42.3M free. Mar 7 02:12:48.087119 systemd-modules-load[195]: Inserted module 'overlay' Mar 7 02:12:48.222068 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 02:12:48.222090 kernel: Bridge firewalling registered Mar 7 02:12:48.222107 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 02:12:48.113162 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 7 02:12:48.209841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 02:12:48.210227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 02:12:48.210608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:12:48.231291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 02:12:48.235337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 02:12:48.239139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 02:12:48.246432 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 02:12:48.250533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 02:12:48.260159 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 7 02:12:48.286251 dracut-cmdline[225]: dracut-dracut-053 Mar 7 02:12:48.286251 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 02:12:48.265428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 02:12:48.271446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 02:12:48.278408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 02:12:48.317340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 02:12:48.358494 systemd-resolved[251]: Positive Trust Anchors: Mar 7 02:12:48.358532 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 02:12:48.358576 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 02:12:48.387593 kernel: SCSI subsystem initialized Mar 7 02:12:48.362177 systemd-resolved[251]: Defaulting to hostname 'linux'. Mar 7 02:12:48.363562 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 02:12:48.367518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 02:12:48.402047 kernel: Loading iSCSI transport class v2.0-870. Mar 7 02:12:48.421092 kernel: iscsi: registered transport (tcp) Mar 7 02:12:48.453892 kernel: iscsi: registered transport (qla4xxx) Mar 7 02:12:48.454047 kernel: QLogic iSCSI HBA Driver Mar 7 02:12:48.516929 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 02:12:48.530338 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 02:12:48.568379 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 02:12:48.568488 kernel: device-mapper: uevent: version 1.0.3 Mar 7 02:12:48.571774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 02:12:48.627085 kernel: raid6: avx2x4 gen() 20623 MB/s Mar 7 02:12:48.645099 kernel: raid6: avx2x2 gen() 29553 MB/s Mar 7 02:12:48.663952 kernel: raid6: avx2x1 gen() 25072 MB/s Mar 7 02:12:48.664090 kernel: raid6: using algorithm avx2x2 gen() 29553 MB/s Mar 7 02:12:48.683021 kernel: raid6: .... xor() 29998 MB/s, rmw enabled Mar 7 02:12:48.683111 kernel: raid6: using avx2x2 recovery algorithm Mar 7 02:12:48.705155 kernel: xor: automatically using best checksumming function avx Mar 7 02:12:48.872065 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 02:12:48.885500 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 02:12:48.897264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 7 02:12:48.908827 systemd-udevd[414]: Using default interface naming scheme 'v255'. Mar 7 02:12:48.913373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 02:12:48.917399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 02:12:48.939552 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Mar 7 02:12:48.972383 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 02:12:48.991242 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 02:12:49.057678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 02:12:49.071113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 02:12:49.085358 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 02:12:49.092556 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 02:12:49.099616 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 02:12:49.105661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 02:12:49.119327 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 02:12:49.118163 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 02:12:49.131448 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 02:12:49.131474 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 02:12:49.126743 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 02:12:49.139349 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 02:12:49.139364 kernel: GPT:9289727 != 19775487 Mar 7 02:12:49.126843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 02:12:49.150425 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 02:12:49.150444 kernel: GPT:9289727 != 19775487 Mar 7 02:12:49.150469 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 02:12:49.150479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:12:49.134554 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 02:12:49.156699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 02:12:49.157160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:12:49.160325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:12:49.176151 kernel: libata version 3.00 loaded. Mar 7 02:12:49.178712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:12:49.179246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 02:12:49.195280 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 7 02:12:49.198078 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 02:12:49.204032 kernel: AES CTR mode by8 optimization enabled Mar 7 02:12:49.204087 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 02:12:49.213030 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Mar 7 02:12:49.215033 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 02:12:49.215224 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 02:12:49.225364 kernel: scsi host0: ahci Mar 7 02:12:49.225545 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474) Mar 7 02:12:49.225558 kernel: scsi host1: ahci Mar 7 02:12:49.225881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 02:12:49.360539 kernel: scsi host2: ahci Mar 7 02:12:49.360731 kernel: scsi host3: ahci Mar 7 02:12:49.360883 kernel: scsi host4: ahci Mar 7 02:12:49.361086 kernel: scsi host5: ahci Mar 7 02:12:49.361236 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 7 02:12:49.361248 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 7 02:12:49.361258 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 7 02:12:49.361272 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 7 02:12:49.361281 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 7 02:12:49.361291 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 7 02:12:49.360931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:12:49.371293 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 7 02:12:49.381335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 02:12:49.390569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 02:12:49.393793 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 02:12:49.416191 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 02:12:49.425007 disk-uuid[554]: Primary Header is updated. Mar 7 02:12:49.425007 disk-uuid[554]: Secondary Entries is updated. Mar 7 02:12:49.425007 disk-uuid[554]: Secondary Header is updated. Mar 7 02:12:49.435697 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:12:49.435713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:12:49.436926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 02:12:49.477861 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 02:12:49.539018 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 02:12:49.543605 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 02:12:49.543671 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 02:12:49.544005 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 02:12:49.548050 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 02:12:49.551050 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 02:12:49.551072 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 02:12:49.553234 kernel: ata3.00: applying bridge limits Mar 7 02:12:49.554798 kernel: ata3.00: configured for UDMA/100 Mar 7 02:12:49.558078 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 02:12:49.606082 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 02:12:49.606375 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 02:12:49.619063 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 02:12:50.436023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:12:50.436800 disk-uuid[555]: The operation has completed successfully. Mar 7 02:12:50.465911 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 02:12:50.466116 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 02:12:50.490142 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 02:12:50.496043 sh[593]: Success Mar 7 02:12:50.511049 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 02:12:50.546613 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 02:12:50.563413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 02:12:50.566150 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 02:12:50.582139 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 02:12:50.582168 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:12:50.582179 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 02:12:50.584747 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 02:12:50.586661 kernel: BTRFS info (device dm-0): using free space tree Mar 7 02:12:50.594517 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 02:12:50.597322 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 02:12:50.614147 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 02:12:50.617678 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 02:12:50.630892 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 02:12:50.630919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:12:50.630931 kernel: BTRFS info (device vda6): using free space tree Mar 7 02:12:50.638020 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 02:12:50.649336 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 02:12:50.653498 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 02:12:50.659231 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 7 02:12:50.670169 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 02:12:50.722726 ignition[689]: Ignition 2.19.0 Mar 7 02:12:50.723038 ignition[689]: Stage: fetch-offline Mar 7 02:12:50.723088 ignition[689]: no configs at "/usr/lib/ignition/base.d" Mar 7 02:12:50.723102 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:12:50.723251 ignition[689]: parsed url from cmdline: "" Mar 7 02:12:50.723257 ignition[689]: no config URL provided Mar 7 02:12:50.723265 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 02:12:50.723278 ignition[689]: no config at "/usr/lib/ignition/user.ign" Mar 7 02:12:50.723316 ignition[689]: op(1): [started] loading QEMU firmware config module Mar 7 02:12:50.723324 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 02:12:50.741407 ignition[689]: op(1): [finished] loading QEMU firmware config module Mar 7 02:12:50.741426 ignition[689]: QEMU firmware config was not found. Ignoring... Mar 7 02:12:50.761165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 02:12:50.779172 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 02:12:50.804496 systemd-networkd[782]: lo: Link UP Mar 7 02:12:50.804525 systemd-networkd[782]: lo: Gained carrier Mar 7 02:12:50.809727 systemd-networkd[782]: Enumeration completed Mar 7 02:12:50.810136 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 02:12:50.817315 systemd[1]: Reached target network.target - Network. Mar 7 02:12:50.822572 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:12:50.822595 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 02:12:50.832429 systemd-networkd[782]: eth0: Link UP Mar 7 02:12:50.832451 systemd-networkd[782]: eth0: Gained carrier Mar 7 02:12:50.832459 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:12:50.869031 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 02:12:50.920632 ignition[689]: parsing config with SHA512: 1535bfaee6481ad5581ea893128d3e0e344aeabf1c67a1c6e60d0b0fa7c99eb69da0eca65b6fe1215187b978aeb2391d223d5dd3473940804eb071b545004b23 Mar 7 02:12:50.923907 unknown[689]: fetched base config from "system" Mar 7 02:12:50.923920 unknown[689]: fetched user config from "qemu" Mar 7 02:12:50.924272 ignition[689]: fetch-offline: fetch-offline passed Mar 7 02:12:50.926684 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 02:12:50.924330 ignition[689]: Ignition finished successfully Mar 7 02:12:50.929126 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 02:12:50.945186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 02:12:50.958778 ignition[786]: Ignition 2.19.0 Mar 7 02:12:50.958800 ignition[786]: Stage: kargs Mar 7 02:12:50.959024 ignition[786]: no configs at "/usr/lib/ignition/base.d" Mar 7 02:12:50.959036 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:12:50.959721 ignition[786]: kargs: kargs passed Mar 7 02:12:50.959760 ignition[786]: Ignition finished successfully Mar 7 02:12:50.972344 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 02:12:50.988208 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 02:12:51.001172 ignition[795]: Ignition 2.19.0 Mar 7 02:12:51.001190 ignition[795]: Stage: disks Mar 7 02:12:51.004430 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 02:12:51.001326 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 7 02:12:51.008178 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 02:12:51.001347 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:12:51.013330 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 02:12:51.001939 ignition[795]: disks: disks passed Mar 7 02:12:51.016430 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 02:12:51.002040 ignition[795]: Ignition finished successfully Mar 7 02:12:51.019024 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 02:12:51.022375 systemd[1]: Reached target basic.target - Basic System. Mar 7 02:12:51.036125 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 02:12:51.053929 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 02:12:51.056847 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 02:12:51.062644 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 02:12:51.161033 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 02:12:51.161014 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 02:12:51.163692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 02:12:51.181066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 02:12:51.193156 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Mar 7 02:12:51.193176 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 02:12:51.193187 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:12:51.184064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 02:12:51.204292 kernel: BTRFS info (device vda6): using free space tree Mar 7 02:12:51.204316 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 02:12:51.197221 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 02:12:51.197257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 02:12:51.197277 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 02:12:51.205478 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 02:12:51.211372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 02:12:51.233132 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 02:12:51.272019 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 02:12:51.280038 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Mar 7 02:12:51.284883 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 02:12:51.289692 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 02:12:51.388106 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 02:12:51.397119 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 02:12:51.402156 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 02:12:51.409754 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 02:12:51.431438 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 02:12:51.438857 ignition[927]: INFO : Ignition 2.19.0 Mar 7 02:12:51.438857 ignition[927]: INFO : Stage: mount Mar 7 02:12:51.442479 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 02:12:51.442479 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:12:51.447781 ignition[927]: INFO : mount: mount passed Mar 7 02:12:51.447781 ignition[927]: INFO : Ignition finished successfully Mar 7 02:12:51.448215 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 02:12:51.461100 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 02:12:51.578465 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 02:12:51.593209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 02:12:51.604016 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 7 02:12:51.604045 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 02:12:51.608575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:12:51.608595 kernel: BTRFS info (device vda6): using free space tree Mar 7 02:12:51.615039 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 02:12:51.616781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 02:12:51.652022 ignition[957]: INFO : Ignition 2.19.0 Mar 7 02:12:51.652022 ignition[957]: INFO : Stage: files Mar 7 02:12:51.656299 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 02:12:51.656299 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:12:51.656299 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 7 02:12:51.656299 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 02:12:51.656299 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 02:12:51.656299 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 02:12:51.675172 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 02:12:51.675172 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 02:12:51.675172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 02:12:51.675172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 02:12:51.656943 unknown[957]: wrote ssh authorized keys file for user: core Mar 7 02:12:51.735700 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 02:12:51.824564 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 02:12:51.824564 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 02:12:51.833952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 7 02:12:51.912226 systemd-networkd[782]: eth0: Gained IPv6LL Mar 7 02:12:52.190776 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 02:12:52.866152 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 02:12:52.866152 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 7 02:12:52.877664 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 7 02:12:52.919779 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 02:12:52.919779 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 02:12:52.919779 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 7 02:12:52.919779 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 7 02:12:52.919779 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 02:12:52.919779 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 02:12:52.919779 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 02:12:52.919779 ignition[957]: INFO : files: files passed Mar 7 02:12:52.919779 ignition[957]: INFO : Ignition finished successfully Mar 7 02:12:52.900323 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 02:12:52.920251 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 02:12:52.927198 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 02:12:52.933298 systemd[1]: ignition-quench.service: Deactivated successfully. 
Mar 7 02:12:52.979195 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 02:12:52.933424 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 02:12:52.985435 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 02:12:52.985435 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 02:12:52.945915 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 02:12:53.007123 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 02:12:52.949582 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 02:12:52.957396 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 02:12:52.987570 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 02:12:52.987711 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 02:12:52.992341 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 02:12:52.998683 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 02:12:53.001322 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 02:12:53.002251 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 02:12:53.021296 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 02:12:53.042167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 02:12:53.054017 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 02:12:53.057073 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 02:12:53.062638 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 02:12:53.067736 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 02:12:53.067858 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 02:12:53.073131 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 02:12:53.078050 systemd[1]: Stopped target basic.target - Basic System. Mar 7 02:12:53.083139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 02:12:53.088303 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 02:12:53.093449 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 02:12:53.098921 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 02:12:53.104310 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 02:12:53.110130 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 02:12:53.115166 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 02:12:53.120723 systemd[1]: Stopped target swap.target - Swaps. Mar 7 02:12:53.125286 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 02:12:53.125404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 02:12:53.130555 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:12:53.135148 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 7 02:12:53.140448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 02:12:53.140622 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 02:12:53.146137 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 02:12:53.146251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 02:12:53.151526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 02:12:53.151645 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 02:12:53.157351 systemd[1]: Stopped target paths.target - Path Units. Mar 7 02:12:53.161846 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 02:12:53.165131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 02:12:53.168454 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 02:12:53.173065 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 02:12:53.177927 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 02:12:53.178119 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 02:12:53.182927 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 02:12:53.183130 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 02:12:53.188275 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 02:12:53.188420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 02:12:53.229487 ignition[1013]: INFO : Ignition 2.19.0 Mar 7 02:12:53.229487 ignition[1013]: INFO : Stage: umount Mar 7 02:12:53.229487 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 02:12:53.229487 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:12:53.229487 ignition[1013]: INFO : umount: umount passed Mar 7 02:12:53.229487 ignition[1013]: INFO : Ignition finished successfully Mar 7 02:12:53.193467 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 02:12:53.193587 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 02:12:53.209202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 02:12:53.213553 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 02:12:53.213708 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 02:12:53.222189 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 02:12:53.226342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 02:12:53.226507 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 02:12:53.234555 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 02:12:53.234689 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 02:12:53.240551 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 02:12:53.240678 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 02:12:53.245944 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 02:12:53.246137 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 02:12:53.252854 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 02:12:53.253653 systemd[1]: Stopped target network.target - Network. Mar 7 02:12:53.255053 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 7 02:12:53.255116 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 02:12:53.255681 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 02:12:53.255724 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 02:12:53.256809 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 02:12:53.256853 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 02:12:53.258773 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 02:12:53.258820 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 02:12:53.260194 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 02:12:53.261868 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 02:12:53.277808 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 02:12:53.278061 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 02:12:53.284108 systemd-networkd[782]: eth0: DHCPv6 lease lost Mar 7 02:12:53.284688 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 02:12:53.284771 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 02:12:53.289147 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 02:12:53.289313 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 02:12:53.294815 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 02:12:53.294877 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 02:12:53.311184 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 02:12:53.316041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 02:12:53.316109 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 02:12:53.321625 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 02:12:53.321676 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 02:12:53.326998 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 02:12:53.327050 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 02:12:53.329829 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 02:12:53.335265 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 02:12:53.335390 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 02:12:53.348640 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 02:12:53.348798 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 02:12:53.473430 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 7 02:12:53.354088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 02:12:53.354142 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 02:12:53.357741 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 02:12:53.357783 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 02:12:53.362713 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 02:12:53.362762 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 02:12:53.365527 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 02:12:53.365576 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Mar 7 02:12:53.370386 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 02:12:53.370436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 02:12:53.375813 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 02:12:53.375862 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 02:12:53.391132 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 02:12:53.396339 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 02:12:53.396396 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 02:12:53.401799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 02:12:53.401847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:12:53.407679 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 02:12:53.407807 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 02:12:53.412153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 02:12:53.412266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 02:12:53.418208 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 02:12:53.434107 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 02:12:53.442362 systemd[1]: Switching root. Mar 7 02:12:53.537545 systemd-journald[194]: Journal stopped Mar 7 02:12:54.785660 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 02:12:54.785747 kernel: SELinux: policy capability open_perms=1 Mar 7 02:12:54.785768 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 02:12:54.785785 kernel: SELinux: policy capability always_check_network=0 Mar 7 02:12:54.785801 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 02:12:54.785822 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 02:12:54.785837 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 02:12:54.785852 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 02:12:54.785867 kernel: audit: type=1403 audit(1772849573.626:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 02:12:54.785887 systemd[1]: Successfully loaded SELinux policy in 46.935ms. Mar 7 02:12:54.785911 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.353ms. Mar 7 02:12:54.785929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 02:12:54.785946 systemd[1]: Detected virtualization kvm. Mar 7 02:12:54.786158 systemd[1]: Detected architecture x86-64. Mar 7 02:12:54.786179 systemd[1]: Detected first boot. Mar 7 02:12:54.786196 systemd[1]: Initializing machine ID from VM UUID. Mar 7 02:12:54.786213 zram_generator::config[1056]: No configuration found. Mar 7 02:12:54.786237 systemd[1]: Populated /etc with preset unit settings. Mar 7 02:12:54.786257 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 02:12:54.786274 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 02:12:54.786290 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Mar 7 02:12:54.786307 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 02:12:54.786328 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 02:12:54.786344 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 02:12:54.786361 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 02:12:54.786377 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 02:12:54.786396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 02:12:54.786413 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 02:12:54.786429 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 02:12:54.786445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 02:12:54.786462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 02:12:54.786478 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 02:12:54.786494 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 02:12:54.786510 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 02:12:54.786528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 02:12:54.786548 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 02:12:54.786564 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 02:12:54.786581 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 02:12:54.786597 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 02:12:54.786613 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 02:12:54.786629 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 02:12:54.786645 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 02:12:54.786661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 02:12:54.786681 systemd[1]: Reached target slices.target - Slice Units. Mar 7 02:12:54.786697 systemd[1]: Reached target swap.target - Swaps. Mar 7 02:12:54.786713 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 02:12:54.786729 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 02:12:54.786745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 02:12:54.786761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 02:12:54.786777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 02:12:54.786794 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 02:12:54.786810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 02:12:54.786830 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 02:12:54.786846 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 02:12:54.786862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 7 02:12:54.786880 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 02:12:54.786896 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 02:12:54.786913 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 02:12:54.786929 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 02:12:54.786945 systemd[1]: Reached target machines.target - Containers. Mar 7 02:12:54.787063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 02:12:54.787083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 02:12:54.787100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 02:12:54.787117 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 02:12:54.787133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 02:12:54.787149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 02:12:54.787166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 02:12:54.787181 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 02:12:54.787197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 02:12:54.787217 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 02:12:54.787234 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 02:12:54.787250 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 02:12:54.787266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 02:12:54.787282 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 02:12:54.787298 kernel: loop: module loaded Mar 7 02:12:54.787314 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 02:12:54.787333 kernel: ACPI: bus type drm_connector registered Mar 7 02:12:54.787348 kernel: fuse: init (API version 7.39) Mar 7 02:12:54.787367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 02:12:54.787384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 02:12:54.787426 systemd-journald[1137]: Collecting audit messages is disabled. Mar 7 02:12:54.787456 systemd-journald[1137]: Journal started Mar 7 02:12:54.787482 systemd-journald[1137]: Runtime Journal (/run/log/journal/367efae97362449c939b05d5a07d6128) is 6.0M, max 48.4M, 42.3M free. Mar 7 02:12:54.298948 systemd[1]: Queued start job for default target multi-user.target. Mar 7 02:12:54.324525 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 02:12:54.325309 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 02:12:54.325823 systemd[1]: systemd-journald.service: Consumed 1.356s CPU time. Mar 7 02:12:54.796487 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 02:12:54.805224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 02:12:54.810314 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 02:12:54.810377 systemd[1]: Stopped verity-setup.service. 
Mar 7 02:12:54.820060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 02:12:54.827823 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 02:12:54.829403 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 02:12:54.833089 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 02:12:54.836885 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 02:12:54.840442 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 02:12:54.844391 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 02:12:54.848352 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 02:12:54.852077 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 02:12:54.856416 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 02:12:54.861070 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 02:12:54.861362 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 02:12:54.866346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 02:12:54.866607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 02:12:54.870837 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 02:12:54.871187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 02:12:54.875549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 02:12:54.875806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 02:12:54.880647 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 02:12:54.880904 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 02:12:54.885303 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 02:12:54.885568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 02:12:54.889827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 02:12:54.894243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 02:12:54.899516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 02:12:54.918687 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 02:12:54.932228 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 02:12:54.937952 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 02:12:54.942033 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 02:12:54.942128 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 02:12:54.947387 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 02:12:54.955345 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 02:12:54.961096 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 02:12:54.964844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 02:12:54.967291 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 7 02:12:54.972937 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 02:12:54.977070 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 02:12:54.978853 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 02:12:54.982848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 02:12:54.990187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 02:12:54.996141 systemd-journald[1137]: Time spent on flushing to /var/log/journal/367efae97362449c939b05d5a07d6128 is 36.619ms for 940 entries. Mar 7 02:12:54.996141 systemd-journald[1137]: System Journal (/var/log/journal/367efae97362449c939b05d5a07d6128) is 8.0M, max 195.6M, 187.6M free. Mar 7 02:12:55.055527 systemd-journald[1137]: Received client request to flush runtime journal. Mar 7 02:12:55.055599 kernel: loop0: detected capacity change from 0 to 140768 Mar 7 02:12:54.999420 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 02:12:55.019451 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 02:12:55.034719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 02:12:55.042206 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 02:12:55.052354 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 02:12:55.057566 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 02:12:55.068369 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 02:12:55.073517 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 02:12:55.079570 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 02:12:55.088042 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 02:12:55.096933 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 02:12:55.111331 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 02:12:55.121323 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 02:12:55.128177 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 02:12:55.138677 kernel: loop1: detected capacity change from 0 to 142488 Mar 7 02:12:55.153323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 02:12:55.159092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 02:12:55.159806 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 02:12:55.171853 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 02:12:55.198323 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 02:12:55.198347 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 02:12:55.202053 kernel: loop2: detected capacity change from 0 to 217752 Mar 7 02:12:55.207747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 7 02:12:55.249295 kernel: loop3: detected capacity change from 0 to 140768 Mar 7 02:12:55.269032 kernel: loop4: detected capacity change from 0 to 142488 Mar 7 02:12:55.288047 kernel: loop5: detected capacity change from 0 to 217752 Mar 7 02:12:55.307298 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 02:12:55.308198 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 7 02:12:55.315811 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 02:12:55.315827 systemd[1]: Reloading... Mar 7 02:12:55.395032 zram_generator::config[1218]: No configuration found. Mar 7 02:12:55.503507 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 02:12:55.576422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:12:55.644644 systemd[1]: Reloading finished in 328 ms. Mar 7 02:12:55.689920 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 02:12:55.694335 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 02:12:55.698631 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 02:12:55.723374 systemd[1]: Starting ensure-sysext.service... Mar 7 02:12:55.727147 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 02:12:55.732835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 02:12:55.744146 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Mar 7 02:12:55.744158 systemd[1]: Reloading... Mar 7 02:12:55.760404 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 02:12:55.761108 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 02:12:55.762141 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 02:12:55.762413 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 7 02:12:55.762531 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 7 02:12:55.766003 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 02:12:55.766081 systemd-tmpfiles[1260]: Skipping /boot Mar 7 02:12:55.766826 systemd-udevd[1261]: Using default interface naming scheme 'v255'. Mar 7 02:12:55.779129 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 02:12:55.780106 systemd-tmpfiles[1260]: Skipping /boot Mar 7 02:12:55.801053 zram_generator::config[1290]: No configuration found. Mar 7 02:12:55.867063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1325) Mar 7 02:12:55.918030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 7 02:12:55.928034 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 7 02:12:55.929130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 7 02:12:55.933022 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 02:12:55.933249 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 02:12:55.938248 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 02:12:55.948037 kernel: ACPI: button: Power Button [PWRF] Mar 7 02:12:55.986475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 02:12:55.989911 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 02:12:55.990147 systemd[1]: Reloading finished in 245 ms. Mar 7 02:12:56.043056 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 02:12:56.057790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 02:12:56.061418 kernel: kvm_amd: TSC scaling supported Mar 7 02:12:56.061456 kernel: kvm_amd: Nested Virtualization enabled Mar 7 02:12:56.061479 kernel: kvm_amd: Nested Paging enabled Mar 7 02:12:56.065013 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 02:12:56.065048 kernel: kvm_amd: PMU virtualization is disabled Mar 7 02:12:56.096588 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 02:12:56.108019 kernel: EDAC MC: Ver: 3.0.0 Mar 7 02:12:56.122783 systemd[1]: Finished ensure-sysext.service. Mar 7 02:12:56.136687 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 02:12:56.151189 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 02:12:56.170183 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 02:12:56.174703 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 02:12:56.177880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 02:12:56.179029 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 02:12:56.184255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 02:12:56.190693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 02:12:56.194724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 02:12:56.198607 lvm[1366]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 02:12:56.200731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 02:12:56.203743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 02:12:56.206136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 02:12:56.210812 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 02:12:56.221138 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 02:12:56.228214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 02:12:56.230240 augenrules[1382]: No rules Mar 7 02:12:56.231250 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 02:12:56.235234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 02:12:56.239486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 02:12:56.242670 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 02:12:56.243887 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 02:12:56.247409 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 02:12:56.251304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 02:12:56.251483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 02:12:56.252068 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 02:12:56.252256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 02:12:56.253183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 02:12:56.253383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 02:12:56.253908 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 02:12:56.254132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 02:12:56.254842 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 02:12:56.255819 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 02:12:56.262613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:12:56.282405 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 02:12:56.282499 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 02:12:56.282558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 02:12:56.285246 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 02:12:56.287796 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 02:12:56.290129 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 02:12:56.290661 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 02:12:56.291507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 02:12:56.302846 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 02:12:56.313252 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 02:12:56.326262 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 02:12:56.332225 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 02:12:56.392607 systemd-networkd[1381]: lo: Link UP Mar 7 02:12:56.392630 systemd-networkd[1381]: lo: Gained carrier Mar 7 02:12:56.394543 systemd-networkd[1381]: Enumeration completed Mar 7 02:12:56.395396 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:12:56.395417 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 02:12:56.396418 systemd-networkd[1381]: eth0: Link UP Mar 7 02:12:56.396437 systemd-networkd[1381]: eth0: Gained carrier Mar 7 02:12:56.396449 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:12:56.403795 systemd-resolved[1386]: Positive Trust Anchors: Mar 7 02:12:56.403825 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 02:12:56.403870 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 02:12:56.409036 systemd-resolved[1386]: Defaulting to hostname 'linux'. Mar 7 02:12:56.410067 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 02:12:56.410795 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Mar 7 02:12:57.056214 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 02:12:57.056280 systemd-timesyncd[1389]: Initial clock synchronization to Sat 2026-03-07 02:12:57.056142 UTC. Mar 7 02:12:57.056755 systemd-resolved[1386]: Clock change detected. Flushing caches. Mar 7 02:12:57.094748 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 02:12:57.097836 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 02:12:57.100717 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 02:12:57.103911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:12:57.107476 systemd[1]: Reached target network.target - Network. Mar 7 02:12:57.109736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 02:12:57.112638 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 02:12:57.115273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 02:12:57.118304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 02:12:57.121434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 02:12:57.124456 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 02:12:57.124483 systemd[1]: Reached target paths.target - Path Units. Mar 7 02:12:57.126724 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 02:12:57.129479 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 02:12:57.132227 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 02:12:57.135278 systemd[1]: Reached target timers.target - Timer Units. Mar 7 02:12:57.138128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 02:12:57.142249 systemd[1]: Starting docker.socket - Docker Socket for the API... 
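The DHCPv4 lease and IPv6LL address above come from Flatcar's catch-all policy in /usr/lib/systemd/network/zz-default.network, which systemd-networkd matched against eth0. Because /usr ships read-only, a host that needed different behaviour would drop a higher-priority .network unit into /etc/systemd/network, for example via the same Butane mechanism sketched earlier. The file name and contents below are hypothetical, shown only to illustrate the override path:

# Hypothetical override -- zz-default.network itself stays untouched under /usr.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/network/10-eth0.network
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          # Mirrors the behaviour seen in this log: DHCPv4 plus an IPv6 link-local address.
          DHCP=yes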
Mar 7 02:12:57.151437 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 02:12:57.155613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 02:12:57.158881 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 02:12:57.161684 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 02:12:57.164116 systemd[1]: Reached target basic.target - Basic System. Mar 7 02:12:57.166400 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 02:12:57.166423 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 02:12:57.167447 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 02:12:57.171047 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 02:12:57.174291 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 02:12:57.177839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 02:12:57.180325 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 02:12:57.183787 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 02:12:57.187441 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 02:12:57.193742 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 02:12:57.199725 jq[1428]: false Mar 7 02:12:57.201891 extend-filesystems[1429]: Found loop3 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found loop4 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found loop5 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found sr0 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda1 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda2 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda3 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found usr Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda4 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda6 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda7 Mar 7 02:12:57.201891 extend-filesystems[1429]: Found vda9 Mar 7 02:12:57.201891 extend-filesystems[1429]: Checking size of /dev/vda9 Mar 7 02:12:57.325416 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 02:12:57.325442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1313) Mar 7 02:12:57.325457 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 02:12:57.203953 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 02:12:57.325688 extend-filesystems[1429]: Resized partition /dev/vda9 Mar 7 02:12:57.220224 dbus-daemon[1427]: [system] SELinux support is enabled Mar 7 02:12:57.212707 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 02:12:57.334782 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Mar 7 02:12:57.334782 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 02:12:57.334782 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 02:12:57.334782 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Mar 7 02:12:57.218948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 02:12:57.347125 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Mar 7 02:12:57.219397 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 02:12:57.349932 update_engine[1448]: I20260307 02:12:57.250067 1448 main.cc:92] Flatcar Update Engine starting Mar 7 02:12:57.349932 update_engine[1448]: I20260307 02:12:57.268018 1448 update_check_scheduler.cc:74] Next update check in 4m30s Mar 7 02:12:57.223640 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 02:12:57.235641 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 02:12:57.354997 jq[1449]: true Mar 7 02:12:57.240648 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 02:12:57.355205 tar[1453]: linux-amd64/LICENSE Mar 7 02:12:57.355205 tar[1453]: linux-amd64/helm Mar 7 02:12:57.255933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 02:12:57.355690 jq[1455]: true Mar 7 02:12:57.256218 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 02:12:57.256688 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 02:12:57.256931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 02:12:57.263297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 02:12:57.263560 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 02:12:57.286336 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 02:12:57.297406 systemd-logind[1442]: Watching system buttons on /dev/input/event2 (Power Button) Mar 7 02:12:57.297427 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 02:12:57.301698 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 02:12:57.301985 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 02:12:57.302810 systemd-logind[1442]: New seat seat0. Mar 7 02:12:57.311758 systemd[1]: Started update-engine.service - Update Engine. Mar 7 02:12:57.314243 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 02:12:57.323639 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 02:12:57.323771 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 02:12:57.331097 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 02:12:57.331195 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 02:12:57.351816 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 02:12:57.368746 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Mar 7 02:12:57.370698 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
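For scale, the resize recorded above grows the ext4 root on /dev/vda9 from 553472 4-KiB blocks (553472 × 4096 bytes ≈ 2.1 GiB) to 1864699 4-KiB blocks (≈ 7.1 GiB); this is the online resize extend-filesystems.service performs on first boot so the root filesystem fills its partition.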
Mar 7 02:12:57.375085 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 02:12:57.383941 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 02:12:57.473331 containerd[1454]: time="2026-03-07T02:12:57.473230254Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 02:12:57.489745 containerd[1454]: time="2026-03-07T02:12:57.489594444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492020774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492043286Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492057333Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492199438Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492213384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492270741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492345 containerd[1454]: time="2026-03-07T02:12:57.492282463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492733 containerd[1454]: time="2026-03-07T02:12:57.492713478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492792 containerd[1454]: time="2026-03-07T02:12:57.492779210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492838 containerd[1454]: time="2026-03-07T02:12:57.492825577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:12:57.492875 containerd[1454]: time="2026-03-07T02:12:57.492865261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 02:12:57.493008 containerd[1454]: time="2026-03-07T02:12:57.492994082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:12:57.493273 containerd[1454]: time="2026-03-07T02:12:57.493256772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 7 02:12:57.493437 containerd[1454]: time="2026-03-07T02:12:57.493421089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:12:57.493494 containerd[1454]: time="2026-03-07T02:12:57.493482213Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 02:12:57.493706 containerd[1454]: time="2026-03-07T02:12:57.493690411Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 02:12:57.493801 containerd[1454]: time="2026-03-07T02:12:57.493787873Z" level=info msg="metadata content store policy set" policy=shared Mar 7 02:12:57.499000 containerd[1454]: time="2026-03-07T02:12:57.498980989Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 02:12:57.499110 containerd[1454]: time="2026-03-07T02:12:57.499094781Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 02:12:57.499161 containerd[1454]: time="2026-03-07T02:12:57.499149964Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 02:12:57.499248 containerd[1454]: time="2026-03-07T02:12:57.499233250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 02:12:57.499306 containerd[1454]: time="2026-03-07T02:12:57.499294223Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 02:12:57.499570 containerd[1454]: time="2026-03-07T02:12:57.499489878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 02:12:57.499803 containerd[1454]: time="2026-03-07T02:12:57.499785771Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.499967210Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.499985304Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.499996725Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500008257Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500018455Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500028514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500039254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500050445Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500061125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500070803Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500080331Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500099937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500111108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501319 containerd[1454]: time="2026-03-07T02:12:57.500120917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500131166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500140964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500152987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500162725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500172964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500183564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500194915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500204853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500214322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500223589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500236283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500252282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500267831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501643 containerd[1454]: time="2026-03-07T02:12:57.500276698Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500351578Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500367487Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500377055Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500386943Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500394708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500438390Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500450311Z" level=info msg="NRI interface is disabled by configuration." Mar 7 02:12:57.501871 containerd[1454]: time="2026-03-07T02:12:57.500463777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 02:12:57.501995 containerd[1454]: time="2026-03-07T02:12:57.500747697Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 02:12:57.501995 containerd[1454]: time="2026-03-07T02:12:57.500795987Z" level=info msg="Connect containerd service" Mar 7 02:12:57.501995 containerd[1454]: time="2026-03-07T02:12:57.500823488Z" level=info msg="using legacy CRI server" Mar 7 02:12:57.501995 containerd[1454]: time="2026-03-07T02:12:57.500830141Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 02:12:57.501995 containerd[1454]: time="2026-03-07T02:12:57.500914198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 02:12:57.502415 containerd[1454]: time="2026-03-07T02:12:57.502365738Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 02:12:57.502792 
containerd[1454]: time="2026-03-07T02:12:57.502672240Z" level=info msg="Start subscribing containerd event" Mar 7 02:12:57.502792 containerd[1454]: time="2026-03-07T02:12:57.502779130Z" level=info msg="Start recovering state" Mar 7 02:12:57.502852 containerd[1454]: time="2026-03-07T02:12:57.502834434Z" level=info msg="Start event monitor" Mar 7 02:12:57.502852 containerd[1454]: time="2026-03-07T02:12:57.502850223Z" level=info msg="Start snapshots syncer" Mar 7 02:12:57.502884 containerd[1454]: time="2026-03-07T02:12:57.502858418Z" level=info msg="Start cni network conf syncer for default" Mar 7 02:12:57.502884 containerd[1454]: time="2026-03-07T02:12:57.502865080Z" level=info msg="Start streaming server" Mar 7 02:12:57.503233 containerd[1454]: time="2026-03-07T02:12:57.503215785Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 02:12:57.503459 containerd[1454]: time="2026-03-07T02:12:57.503443760Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 02:12:57.504675 containerd[1454]: time="2026-03-07T02:12:57.504638972Z" level=info msg="containerd successfully booted in 0.032257s" Mar 7 02:12:57.505273 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 02:12:57.523774 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 02:12:57.546185 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 02:12:57.560745 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 02:12:57.570152 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 02:12:57.570384 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 02:12:57.574688 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 02:12:57.589029 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 02:12:57.593630 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 02:12:57.597428 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 02:12:57.601151 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 02:12:57.760489 tar[1453]: linux-amd64/README.md Mar 7 02:12:57.776493 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 02:12:58.763971 systemd-networkd[1381]: eth0: Gained IPv6LL Mar 7 02:12:58.767073 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 02:12:58.770916 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 02:12:58.783736 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 02:12:58.787963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:12:58.791815 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 02:12:58.811745 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 02:12:58.812062 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 02:12:58.815699 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 02:12:58.819801 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 02:12:59.448629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:12:59.452008 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 02:12:59.455277 systemd[1]: Startup finished in 1.210s (kernel) + 5.889s (initrd) + 5.230s (userspace) = 12.330s. 
Mar 7 02:12:59.455366 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:12:59.805051 kubelet[1538]: E0307 02:12:59.804891 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:12:59.808146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:12:59.808379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:13:01.677229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 02:13:01.678701 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:58252.service - OpenSSH per-connection server daemon (10.0.0.1:58252). Mar 7 02:13:01.731789 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 58252 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:01.734146 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:01.744882 systemd-logind[1442]: New session 1 of user core. Mar 7 02:13:01.746404 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 02:13:01.760794 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 02:13:01.774164 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 02:13:01.777827 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 02:13:01.786596 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 02:13:01.888199 systemd[1555]: Queued start job for default target default.target. Mar 7 02:13:01.898005 systemd[1555]: Created slice app.slice - User Application Slice. Mar 7 02:13:01.898053 systemd[1555]: Reached target paths.target - Paths. Mar 7 02:13:01.898071 systemd[1555]: Reached target timers.target - Timers. Mar 7 02:13:01.900032 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 02:13:01.913234 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 02:13:01.913416 systemd[1555]: Reached target sockets.target - Sockets. Mar 7 02:13:01.913464 systemd[1555]: Reached target basic.target - Basic System. Mar 7 02:13:01.913581 systemd[1555]: Reached target default.target - Main User Target. Mar 7 02:13:01.913632 systemd[1555]: Startup finished in 119ms. Mar 7 02:13:01.913819 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 02:13:01.915854 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 02:13:01.974215 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:58258.service - OpenSSH per-connection server daemon (10.0.0.1:58258). Mar 7 02:13:02.031116 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 58258 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:02.033583 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:02.040872 systemd-logind[1442]: New session 2 of user core. Mar 7 02:13:02.056751 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 7 02:13:02.112644 sshd[1566]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:02.118947 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:58258.service: Deactivated successfully. Mar 7 02:13:02.120400 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 02:13:02.121932 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Mar 7 02:13:02.136613 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:58260.service - OpenSSH per-connection server daemon (10.0.0.1:58260). Mar 7 02:13:02.137652 systemd-logind[1442]: Removed session 2. Mar 7 02:13:02.165113 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 58260 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:02.166673 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:02.170906 systemd-logind[1442]: New session 3 of user core. Mar 7 02:13:02.182649 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 02:13:02.233018 sshd[1573]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:02.240121 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:58260.service: Deactivated successfully. Mar 7 02:13:02.241818 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 02:13:02.243167 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Mar 7 02:13:02.244404 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:58266.service - OpenSSH per-connection server daemon (10.0.0.1:58266). Mar 7 02:13:02.245299 systemd-logind[1442]: Removed session 3. Mar 7 02:13:02.280600 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 58266 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:02.282223 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:02.287476 systemd-logind[1442]: New session 4 of user core. Mar 7 02:13:02.297748 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 02:13:02.354719 sshd[1580]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:02.367923 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:58266.service: Deactivated successfully. Mar 7 02:13:02.369398 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 02:13:02.370832 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Mar 7 02:13:02.372137 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:58276.service - OpenSSH per-connection server daemon (10.0.0.1:58276). Mar 7 02:13:02.373040 systemd-logind[1442]: Removed session 4. Mar 7 02:13:02.403942 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:02.405299 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:02.409369 systemd-logind[1442]: New session 5 of user core. Mar 7 02:13:02.430658 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 02:13:02.490666 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 02:13:02.491017 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:13:02.510149 sudo[1590]: pam_unix(sudo:session): session closed for user root Mar 7 02:13:02.512181 sshd[1587]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:02.527034 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:58276.service: Deactivated successfully. Mar 7 02:13:02.528641 systemd[1]: session-5.scope: Deactivated successfully. 
Mar 7 02:13:02.529989 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Mar 7 02:13:02.539785 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:58278.service - OpenSSH per-connection server daemon (10.0.0.1:58278). Mar 7 02:13:02.540747 systemd-logind[1442]: Removed session 5. Mar 7 02:13:02.566902 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 58278 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:02.568184 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:02.572298 systemd-logind[1442]: New session 6 of user core. Mar 7 02:13:02.581663 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 02:13:02.636325 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 02:13:02.636773 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:13:02.640992 sudo[1599]: pam_unix(sudo:session): session closed for user root Mar 7 02:13:02.646975 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 02:13:02.647311 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:13:02.669850 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 02:13:02.671882 auditctl[1602]: No rules Mar 7 02:13:02.672275 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 02:13:02.672555 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 02:13:02.675026 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 02:13:02.707012 augenrules[1620]: No rules Mar 7 02:13:02.708355 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 02:13:02.709347 sudo[1598]: pam_unix(sudo:session): session closed for user root Mar 7 02:13:02.711065 sshd[1595]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:02.720925 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:58278.service: Deactivated successfully. Mar 7 02:13:02.722400 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 02:13:02.723767 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Mar 7 02:13:02.734958 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:58288.service - OpenSSH per-connection server daemon (10.0.0.1:58288). Mar 7 02:13:02.736084 systemd-logind[1442]: Removed session 6. Mar 7 02:13:02.761849 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 58288 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:02.763326 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:02.767574 systemd-logind[1442]: New session 7 of user core. Mar 7 02:13:02.777673 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 02:13:02.831627 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 02:13:02.831979 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:13:03.093816 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 7 02:13:03.094089 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 02:13:03.353248 dockerd[1650]: time="2026-03-07T02:13:03.353068507Z" level=info msg="Starting up" Mar 7 02:13:03.569033 dockerd[1650]: time="2026-03-07T02:13:03.568296793Z" level=info msg="Loading containers: start." Mar 7 02:13:03.753541 kernel: Initializing XFRM netlink socket Mar 7 02:13:03.841454 systemd-networkd[1381]: docker0: Link UP Mar 7 02:13:03.873595 dockerd[1650]: time="2026-03-07T02:13:03.873540183Z" level=info msg="Loading containers: done." Mar 7 02:13:03.888032 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck643938804-merged.mount: Deactivated successfully. Mar 7 02:13:03.889755 dockerd[1650]: time="2026-03-07T02:13:03.889696814Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 02:13:03.889833 dockerd[1650]: time="2026-03-07T02:13:03.889791220Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 02:13:03.889917 dockerd[1650]: time="2026-03-07T02:13:03.889886579Z" level=info msg="Daemon has completed initialization" Mar 7 02:13:03.934418 dockerd[1650]: time="2026-03-07T02:13:03.934319626Z" level=info msg="API listen on /run/docker.sock" Mar 7 02:13:03.934587 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 02:13:04.339120 containerd[1454]: time="2026-03-07T02:13:04.339056499Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 02:13:04.994785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459480047.mount: Deactivated successfully. 
Mar 7 02:13:06.311939 containerd[1454]: time="2026-03-07T02:13:06.311869104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:06.312651 containerd[1454]: time="2026-03-07T02:13:06.312612542Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 7 02:13:06.313812 containerd[1454]: time="2026-03-07T02:13:06.313765695Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:06.316660 containerd[1454]: time="2026-03-07T02:13:06.316613161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:06.317905 containerd[1454]: time="2026-03-07T02:13:06.317866791Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 1.978758706s" Mar 7 02:13:06.317952 containerd[1454]: time="2026-03-07T02:13:06.317908820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 02:13:06.318632 containerd[1454]: time="2026-03-07T02:13:06.318603237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 02:13:07.645569 containerd[1454]: time="2026-03-07T02:13:07.645436745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:07.646557 containerd[1454]: time="2026-03-07T02:13:07.646458361Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 7 02:13:07.647921 containerd[1454]: time="2026-03-07T02:13:07.647870534Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:07.650645 containerd[1454]: time="2026-03-07T02:13:07.650607669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:07.651742 containerd[1454]: time="2026-03-07T02:13:07.651705609Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.333075281s" Mar 7 02:13:07.651795 containerd[1454]: time="2026-03-07T02:13:07.651744271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 02:13:07.652278 containerd[1454]: 
time="2026-03-07T02:13:07.652236663Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 02:13:08.598213 containerd[1454]: time="2026-03-07T02:13:08.598073402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:08.599092 containerd[1454]: time="2026-03-07T02:13:08.599050893Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 7 02:13:08.600108 containerd[1454]: time="2026-03-07T02:13:08.600062186Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:08.603083 containerd[1454]: time="2026-03-07T02:13:08.603035294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:08.603933 containerd[1454]: time="2026-03-07T02:13:08.603900677Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 951.621004ms" Mar 7 02:13:08.603965 containerd[1454]: time="2026-03-07T02:13:08.603936694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 02:13:08.604473 containerd[1454]: time="2026-03-07T02:13:08.604412653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 02:13:09.499273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626997604.mount: Deactivated successfully. 
Mar 7 02:13:09.698015 containerd[1454]: time="2026-03-07T02:13:09.697941805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:09.699084 containerd[1454]: time="2026-03-07T02:13:09.699025909Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 7 02:13:09.700189 containerd[1454]: time="2026-03-07T02:13:09.700137023Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:09.702470 containerd[1454]: time="2026-03-07T02:13:09.702422620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:09.703158 containerd[1454]: time="2026-03-07T02:13:09.703121266Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.098650645s" Mar 7 02:13:09.703207 containerd[1454]: time="2026-03-07T02:13:09.703162273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 02:13:09.703858 containerd[1454]: time="2026-03-07T02:13:09.703700015Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 02:13:09.877993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 02:13:09.887697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:13:10.052297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:13:10.056776 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:13:10.099903 kubelet[1882]: E0307 02:13:10.099857 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:13:10.105310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:13:10.105585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:13:10.246826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733810113.mount: Deactivated successfully. 
Mar 7 02:13:11.363763 containerd[1454]: time="2026-03-07T02:13:11.363681965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:11.364829 containerd[1454]: time="2026-03-07T02:13:11.364783966Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 7 02:13:11.366762 containerd[1454]: time="2026-03-07T02:13:11.366693890Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:11.369806 containerd[1454]: time="2026-03-07T02:13:11.369743796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:11.371148 containerd[1454]: time="2026-03-07T02:13:11.371102460Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.667372219s" Mar 7 02:13:11.371202 containerd[1454]: time="2026-03-07T02:13:11.371147685Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 02:13:11.371734 containerd[1454]: time="2026-03-07T02:13:11.371656053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 02:13:11.743427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169872617.mount: Deactivated successfully. 
Mar 7 02:13:11.751003 containerd[1454]: time="2026-03-07T02:13:11.750929577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:11.751798 containerd[1454]: time="2026-03-07T02:13:11.751720460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 02:13:11.752786 containerd[1454]: time="2026-03-07T02:13:11.752737318Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:11.755443 containerd[1454]: time="2026-03-07T02:13:11.755360095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:11.756479 containerd[1454]: time="2026-03-07T02:13:11.756430974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 384.705972ms" Mar 7 02:13:11.756479 containerd[1454]: time="2026-03-07T02:13:11.756469927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 02:13:11.757246 containerd[1454]: time="2026-03-07T02:13:11.757194158Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 02:13:12.201285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801574727.mount: Deactivated successfully. Mar 7 02:13:13.023925 containerd[1454]: time="2026-03-07T02:13:13.023854099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:13.024956 containerd[1454]: time="2026-03-07T02:13:13.024914150Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 7 02:13:13.026204 containerd[1454]: time="2026-03-07T02:13:13.026172703Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:13.028715 containerd[1454]: time="2026-03-07T02:13:13.028673469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:13.029586 containerd[1454]: time="2026-03-07T02:13:13.029541191Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.272298503s" Mar 7 02:13:13.029586 containerd[1454]: time="2026-03-07T02:13:13.029579604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 7 02:13:14.710560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 02:13:14.721918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:13:14.768308 systemd[1]: Reloading requested from client PID 2041 ('systemctl') (unit session-7.scope)... Mar 7 02:13:14.768351 systemd[1]: Reloading... Mar 7 02:13:14.900688 zram_generator::config[2083]: No configuration found. Mar 7 02:13:15.074677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:13:15.166894 systemd[1]: Reloading finished in 397 ms. Mar 7 02:13:15.218976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:13:15.222219 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:13:15.223921 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:13:15.224227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:13:15.237953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:13:15.390398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:13:15.395304 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:13:15.443646 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:13:15.600183 kubelet[2130]: I0307 02:13:15.600094 2130 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:13:15.600183 kubelet[2130]: I0307 02:13:15.600149 2130 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:13:15.600183 kubelet[2130]: I0307 02:13:15.600168 2130 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:13:15.600183 kubelet[2130]: I0307 02:13:15.600174 2130 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 02:13:15.600428 kubelet[2130]: I0307 02:13:15.600393 2130 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:13:15.665361 kubelet[2130]: I0307 02:13:15.665133 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:13:15.666109 kubelet[2130]: E0307 02:13:15.665799 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:13:15.669753 kubelet[2130]: E0307 02:13:15.669716 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:13:15.669835 kubelet[2130]: I0307 02:13:15.669802 2130 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:13:15.676679 kubelet[2130]: I0307 02:13:15.676648 2130 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:13:15.677569 kubelet[2130]: I0307 02:13:15.677425 2130 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:13:15.677692 kubelet[2130]: I0307 02:13:15.677471 2130 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:13:15.677792 kubelet[2130]: I0307 02:13:15.677695 2130 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:13:15.677792 kubelet[2130]: I0307 02:13:15.677704 2130 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:13:15.677843 kubelet[2130]: I0307 02:13:15.677801 2130 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:13:15.679969 kubelet[2130]: I0307 02:13:15.679927 2130 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:13:15.680249 kubelet[2130]: I0307 02:13:15.680210 2130 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:13:15.680249 kubelet[2130]: I0307 02:13:15.680242 2130 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:13:15.680325 kubelet[2130]: I0307 02:13:15.680299 2130 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:13:15.680325 kubelet[2130]: I0307 02:13:15.680317 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:13:15.682442 kubelet[2130]: I0307 02:13:15.682380 2130 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:13:15.684768 kubelet[2130]: I0307 02:13:15.684580 2130 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:13:15.685372 kubelet[2130]: I0307 02:13:15.684851 2130 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:13:15.685372 kubelet[2130]: 
W0307 02:13:15.684924 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 02:13:15.688915 kubelet[2130]: I0307 02:13:15.688591 2130 server.go:1257] "Started kubelet" Mar 7 02:13:15.690712 kubelet[2130]: I0307 02:13:15.690327 2130 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:13:15.692290 kubelet[2130]: I0307 02:13:15.691793 2130 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:13:15.693804 kubelet[2130]: I0307 02:13:15.693250 2130 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:13:15.693804 kubelet[2130]: I0307 02:13:15.693307 2130 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:13:15.693804 kubelet[2130]: I0307 02:13:15.693680 2130 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:13:15.693917 kubelet[2130]: I0307 02:13:15.693887 2130 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:13:15.695174 kubelet[2130]: I0307 02:13:15.695093 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:13:15.695266 kubelet[2130]: E0307 02:13:15.694227 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6d5077364898 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 02:13:15.688568984 +0000 UTC m=+0.289140830,LastTimestamp:2026-03-07 02:13:15.688568984 +0000 UTC m=+0.289140830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 02:13:15.695819 kubelet[2130]: I0307 02:13:15.695757 2130 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:13:15.695866 kubelet[2130]: I0307 02:13:15.695843 2130 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:13:15.695892 kubelet[2130]: I0307 02:13:15.695876 2130 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:13:15.696459 kubelet[2130]: E0307 02:13:15.696384 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:15.696626 kubelet[2130]: E0307 02:13:15.696594 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" Mar 7 02:13:15.697920 kubelet[2130]: I0307 02:13:15.697852 2130 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:13:15.697971 kubelet[2130]: I0307 02:13:15.697950 2130 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:13:15.698911 kubelet[2130]: I0307 02:13:15.698801 2130 factory.go:223] Registration of the 
containerd container factory successfully Mar 7 02:13:15.699789 kubelet[2130]: E0307 02:13:15.699720 2130 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:13:15.714854 kubelet[2130]: I0307 02:13:15.714808 2130 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:13:15.714854 kubelet[2130]: I0307 02:13:15.714835 2130 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:13:15.714854 kubelet[2130]: I0307 02:13:15.714851 2130 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:13:15.717397 kubelet[2130]: I0307 02:13:15.717369 2130 policy_none.go:50] "Start" Mar 7 02:13:15.717397 kubelet[2130]: I0307 02:13:15.717385 2130 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:13:15.717397 kubelet[2130]: I0307 02:13:15.717397 2130 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:13:15.717793 kubelet[2130]: I0307 02:13:15.717747 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 02:13:15.719416 kubelet[2130]: I0307 02:13:15.719391 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 02:13:15.719463 kubelet[2130]: I0307 02:13:15.719456 2130 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:13:15.719605 kubelet[2130]: I0307 02:13:15.719581 2130 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:13:15.719803 kubelet[2130]: E0307 02:13:15.719698 2130 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:13:15.721120 kubelet[2130]: I0307 02:13:15.720400 2130 policy_none.go:44] "Start" Mar 7 02:13:15.726107 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 02:13:15.742146 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 02:13:15.746076 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 02:13:15.757601 kubelet[2130]: E0307 02:13:15.757550 2130 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:13:15.758033 kubelet[2130]: I0307 02:13:15.757983 2130 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:13:15.758120 kubelet[2130]: I0307 02:13:15.758014 2130 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:13:15.758940 kubelet[2130]: I0307 02:13:15.758656 2130 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:13:15.759817 kubelet[2130]: E0307 02:13:15.759758 2130 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 02:13:15.759817 kubelet[2130]: E0307 02:13:15.759813 2130 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 02:13:15.833244 systemd[1]: Created slice kubepods-burstable-podf5dcb3623ce236345f85d5cd417da200.slice - libcontainer container kubepods-burstable-podf5dcb3623ce236345f85d5cd417da200.slice. 
Mar 7 02:13:15.846467 kubelet[2130]: E0307 02:13:15.846412 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:15.849628 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 7 02:13:15.852161 kubelet[2130]: E0307 02:13:15.851942 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:15.854383 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 7 02:13:15.856108 kubelet[2130]: E0307 02:13:15.856061 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:15.861267 kubelet[2130]: I0307 02:13:15.861184 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:13:15.861704 kubelet[2130]: E0307 02:13:15.861663 2130 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Mar 7 02:13:15.897283 kubelet[2130]: I0307 02:13:15.897152 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5dcb3623ce236345f85d5cd417da200-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5dcb3623ce236345f85d5cd417da200\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:15.897283 kubelet[2130]: I0307 02:13:15.897215 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5dcb3623ce236345f85d5cd417da200-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5dcb3623ce236345f85d5cd417da200\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:15.897436 kubelet[2130]: E0307 02:13:15.897276 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" Mar 7 02:13:15.897436 kubelet[2130]: I0307 02:13:15.897300 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:15.897436 kubelet[2130]: I0307 02:13:15.897387 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:15.897436 kubelet[2130]: I0307 02:13:15.897423 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:15.897436 kubelet[2130]: I0307 02:13:15.897441 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:15.897644 kubelet[2130]: I0307 02:13:15.897465 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:15.897644 kubelet[2130]: I0307 02:13:15.897542 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:15.897644 kubelet[2130]: I0307 02:13:15.897559 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5dcb3623ce236345f85d5cd417da200-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5dcb3623ce236345f85d5cd417da200\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:16.064201 kubelet[2130]: I0307 02:13:16.064041 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:13:16.064420 kubelet[2130]: E0307 02:13:16.064390 2130 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Mar 7 02:13:16.150335 kubelet[2130]: E0307 02:13:16.150251 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:16.151374 containerd[1454]: time="2026-03-07T02:13:16.151327115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5dcb3623ce236345f85d5cd417da200,Namespace:kube-system,Attempt:0,}" Mar 7 02:13:16.154836 kubelet[2130]: E0307 02:13:16.154791 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:16.155327 containerd[1454]: time="2026-03-07T02:13:16.155286106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 7 02:13:16.161572 kubelet[2130]: E0307 02:13:16.161434 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:16.162185 containerd[1454]: time="2026-03-07T02:13:16.162111966Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 7 02:13:16.298096 kubelet[2130]: E0307 02:13:16.297998 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Mar 7 02:13:16.469941 kubelet[2130]: I0307 02:13:16.469800 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:13:16.470470 kubelet[2130]: E0307 02:13:16.470243 2130 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Mar 7 02:13:16.610145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930875843.mount: Deactivated successfully. Mar 7 02:13:16.619180 containerd[1454]: time="2026-03-07T02:13:16.619077531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:13:16.623969 containerd[1454]: time="2026-03-07T02:13:16.623864793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 02:13:16.625136 containerd[1454]: time="2026-03-07T02:13:16.624880421Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:13:16.628014 containerd[1454]: time="2026-03-07T02:13:16.627187920Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:13:16.629032 containerd[1454]: time="2026-03-07T02:13:16.628964354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 02:13:16.629967 containerd[1454]: time="2026-03-07T02:13:16.629753993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:13:16.630820 containerd[1454]: time="2026-03-07T02:13:16.630710546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 02:13:16.634764 containerd[1454]: time="2026-03-07T02:13:16.634686308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:13:16.637633 containerd[1454]: time="2026-03-07T02:13:16.637575492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.384729ms" Mar 7 02:13:16.638577 containerd[1454]: time="2026-03-07T02:13:16.638422762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.050365ms" Mar 7 02:13:16.639700 containerd[1454]: time="2026-03-07T02:13:16.639367157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.951577ms" Mar 7 02:13:16.817258 containerd[1454]: time="2026-03-07T02:13:16.816253393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:16.817258 containerd[1454]: time="2026-03-07T02:13:16.816397952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:16.817258 containerd[1454]: time="2026-03-07T02:13:16.816418441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:16.817974 containerd[1454]: time="2026-03-07T02:13:16.817726923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:16.820623 containerd[1454]: time="2026-03-07T02:13:16.820292071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:16.820688 containerd[1454]: time="2026-03-07T02:13:16.820619202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:16.820873 containerd[1454]: time="2026-03-07T02:13:16.820719199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:16.821180 containerd[1454]: time="2026-03-07T02:13:16.820977341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:16.823568 containerd[1454]: time="2026-03-07T02:13:16.822644465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:16.823568 containerd[1454]: time="2026-03-07T02:13:16.822710228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:16.823568 containerd[1454]: time="2026-03-07T02:13:16.822737979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:16.823568 containerd[1454]: time="2026-03-07T02:13:16.822898368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:16.852849 systemd[1]: Started cri-containerd-831ad355dfec132dbe91bba35477696673f47ec25dd32cecdbf786b2c3c340da.scope - libcontainer container 831ad355dfec132dbe91bba35477696673f47ec25dd32cecdbf786b2c3c340da. 
Mar 7 02:13:16.859104 systemd[1]: Started cri-containerd-5c631e31f1c35f26b85fd02dd13efb3a38f6f021b8b6a425ae39071a612a3e21.scope - libcontainer container 5c631e31f1c35f26b85fd02dd13efb3a38f6f021b8b6a425ae39071a612a3e21. Mar 7 02:13:16.862679 systemd[1]: Started cri-containerd-f32b745117c5934807677c44400e6978c34bd17826416e7d1e3c9bc2e639d3ba.scope - libcontainer container f32b745117c5934807677c44400e6978c34bd17826416e7d1e3c9bc2e639d3ba. Mar 7 02:13:16.927014 containerd[1454]: time="2026-03-07T02:13:16.926669602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"831ad355dfec132dbe91bba35477696673f47ec25dd32cecdbf786b2c3c340da\"" Mar 7 02:13:16.932170 containerd[1454]: time="2026-03-07T02:13:16.932084782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5dcb3623ce236345f85d5cd417da200,Namespace:kube-system,Attempt:0,} returns sandbox id \"f32b745117c5934807677c44400e6978c34bd17826416e7d1e3c9bc2e639d3ba\"" Mar 7 02:13:16.933653 kubelet[2130]: E0307 02:13:16.933462 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:16.934097 kubelet[2130]: E0307 02:13:16.934046 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:16.938753 containerd[1454]: time="2026-03-07T02:13:16.938672682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c631e31f1c35f26b85fd02dd13efb3a38f6f021b8b6a425ae39071a612a3e21\"" Mar 7 02:13:16.941051 kubelet[2130]: E0307 02:13:16.940954 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:16.946616 containerd[1454]: time="2026-03-07T02:13:16.944322383Z" level=info msg="CreateContainer within sandbox \"f32b745117c5934807677c44400e6978c34bd17826416e7d1e3c9bc2e639d3ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 02:13:16.953144 containerd[1454]: time="2026-03-07T02:13:16.952728133Z" level=info msg="CreateContainer within sandbox \"831ad355dfec132dbe91bba35477696673f47ec25dd32cecdbf786b2c3c340da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 02:13:16.958285 containerd[1454]: time="2026-03-07T02:13:16.958207573Z" level=info msg="CreateContainer within sandbox \"5c631e31f1c35f26b85fd02dd13efb3a38f6f021b8b6a425ae39071a612a3e21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 02:13:16.987187 containerd[1454]: time="2026-03-07T02:13:16.985365219Z" level=info msg="CreateContainer within sandbox \"f32b745117c5934807677c44400e6978c34bd17826416e7d1e3c9bc2e639d3ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df88937dd50c8fc8b23bd01fbf857beaf01f100350e5592bf97b7891443a5afb\"" Mar 7 02:13:16.987187 containerd[1454]: time="2026-03-07T02:13:16.986810027Z" level=info msg="StartContainer for \"df88937dd50c8fc8b23bd01fbf857beaf01f100350e5592bf97b7891443a5afb\"" Mar 7 02:13:16.990150 containerd[1454]: time="2026-03-07T02:13:16.988990295Z" level=info msg="CreateContainer within 
sandbox \"831ad355dfec132dbe91bba35477696673f47ec25dd32cecdbf786b2c3c340da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"18ec6c6c56a7b32d6a4447f80bb1884cdbc619c79541eb3a65395084c83ab880\"" Mar 7 02:13:16.994610 containerd[1454]: time="2026-03-07T02:13:16.991740105Z" level=info msg="StartContainer for \"18ec6c6c56a7b32d6a4447f80bb1884cdbc619c79541eb3a65395084c83ab880\"" Mar 7 02:13:16.994610 containerd[1454]: time="2026-03-07T02:13:16.993741562Z" level=info msg="CreateContainer within sandbox \"5c631e31f1c35f26b85fd02dd13efb3a38f6f021b8b6a425ae39071a612a3e21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8dfd7caad594a0a6447cadb900c9143013da5de7563517ece6ea8e0f8e39ae72\"" Mar 7 02:13:16.994610 containerd[1454]: time="2026-03-07T02:13:16.994373402Z" level=info msg="StartContainer for \"8dfd7caad594a0a6447cadb900c9143013da5de7563517ece6ea8e0f8e39ae72\"" Mar 7 02:13:17.039819 systemd[1]: Started cri-containerd-df88937dd50c8fc8b23bd01fbf857beaf01f100350e5592bf97b7891443a5afb.scope - libcontainer container df88937dd50c8fc8b23bd01fbf857beaf01f100350e5592bf97b7891443a5afb. Mar 7 02:13:17.061048 systemd[1]: Started cri-containerd-18ec6c6c56a7b32d6a4447f80bb1884cdbc619c79541eb3a65395084c83ab880.scope - libcontainer container 18ec6c6c56a7b32d6a4447f80bb1884cdbc619c79541eb3a65395084c83ab880. Mar 7 02:13:17.064257 systemd[1]: Started cri-containerd-8dfd7caad594a0a6447cadb900c9143013da5de7563517ece6ea8e0f8e39ae72.scope - libcontainer container 8dfd7caad594a0a6447cadb900c9143013da5de7563517ece6ea8e0f8e39ae72. Mar 7 02:13:17.099116 kubelet[2130]: E0307 02:13:17.098957 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" Mar 7 02:13:17.123611 containerd[1454]: time="2026-03-07T02:13:17.123566451Z" level=info msg="StartContainer for \"df88937dd50c8fc8b23bd01fbf857beaf01f100350e5592bf97b7891443a5afb\" returns successfully" Mar 7 02:13:17.141917 containerd[1454]: time="2026-03-07T02:13:17.141840618Z" level=info msg="StartContainer for \"8dfd7caad594a0a6447cadb900c9143013da5de7563517ece6ea8e0f8e39ae72\" returns successfully" Mar 7 02:13:17.151903 containerd[1454]: time="2026-03-07T02:13:17.151385865Z" level=info msg="StartContainer for \"18ec6c6c56a7b32d6a4447f80bb1884cdbc619c79541eb3a65395084c83ab880\" returns successfully" Mar 7 02:13:17.273586 kubelet[2130]: I0307 02:13:17.272869 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:13:17.730949 kubelet[2130]: E0307 02:13:17.730884 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:17.731316 kubelet[2130]: E0307 02:13:17.731025 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:17.733771 kubelet[2130]: E0307 02:13:17.733712 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:17.733907 kubelet[2130]: E0307 02:13:17.733868 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 7 02:13:17.735521 kubelet[2130]: E0307 02:13:17.735461 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:17.735707 kubelet[2130]: E0307 02:13:17.735657 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:18.397946 kubelet[2130]: I0307 02:13:18.397756 2130 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 02:13:18.397946 kubelet[2130]: E0307 02:13:18.397798 2130 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 02:13:18.413567 kubelet[2130]: E0307 02:13:18.413454 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:18.513708 kubelet[2130]: E0307 02:13:18.513653 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:18.614923 kubelet[2130]: E0307 02:13:18.614733 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:18.715241 kubelet[2130]: E0307 02:13:18.714988 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:18.739220 kubelet[2130]: E0307 02:13:18.739168 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:18.739843 kubelet[2130]: E0307 02:13:18.739327 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:18.739843 kubelet[2130]: E0307 02:13:18.739662 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:18.739919 kubelet[2130]: E0307 02:13:18.739856 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:18.816171 kubelet[2130]: E0307 02:13:18.816057 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:18.916376 kubelet[2130]: E0307 02:13:18.916255 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.017631 kubelet[2130]: E0307 02:13:19.017226 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.118208 kubelet[2130]: E0307 02:13:19.118145 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.219025 kubelet[2130]: E0307 02:13:19.218939 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.320041 kubelet[2130]: E0307 02:13:19.319795 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.378452 kubelet[2130]: E0307 02:13:19.378391 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:13:19.381203 kubelet[2130]: E0307 02:13:19.381163 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:19.420750 kubelet[2130]: E0307 02:13:19.420634 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.521611 kubelet[2130]: E0307 02:13:19.521427 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.622852 kubelet[2130]: E0307 02:13:19.622606 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.724898 kubelet[2130]: E0307 02:13:19.724680 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.826063 kubelet[2130]: E0307 02:13:19.825877 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:19.927005 kubelet[2130]: E0307 02:13:19.926909 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:20.027147 kubelet[2130]: E0307 02:13:20.027060 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:20.127845 kubelet[2130]: E0307 02:13:20.127778 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:13:20.196906 kubelet[2130]: I0307 02:13:20.196746 2130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:20.208461 kubelet[2130]: I0307 02:13:20.208391 2130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:20.216915 kubelet[2130]: I0307 02:13:20.216790 2130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:20.685102 kubelet[2130]: I0307 02:13:20.685012 2130 apiserver.go:52] "Watching apiserver" Mar 7 02:13:20.691238 kubelet[2130]: E0307 02:13:20.690879 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:20.691238 kubelet[2130]: E0307 02:13:20.691151 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:20.691238 kubelet[2130]: E0307 02:13:20.691182 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:20.696591 kubelet[2130]: I0307 02:13:20.696448 2130 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:13:20.857098 systemd[1]: Reloading requested from client PID 2419 ('systemctl') (unit session-7.scope)... Mar 7 02:13:20.857291 systemd[1]: Reloading... Mar 7 02:13:20.968598 zram_generator::config[2464]: No configuration found. 
Mar 7 02:13:21.106386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:13:21.230120 systemd[1]: Reloading finished in 372 ms. Mar 7 02:13:21.292095 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:13:21.320583 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:13:21.320987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:13:21.333003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:13:21.523211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:13:21.532113 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:13:21.620640 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:13:21.628258 kubelet[2503]: I0307 02:13:21.628176 2503 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:13:21.628258 kubelet[2503]: I0307 02:13:21.628233 2503 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:13:21.628258 kubelet[2503]: I0307 02:13:21.628251 2503 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:13:21.628258 kubelet[2503]: I0307 02:13:21.628256 2503 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 02:13:21.628575 kubelet[2503]: I0307 02:13:21.628452 2503 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:13:21.629670 kubelet[2503]: I0307 02:13:21.629620 2503 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 02:13:21.631759 kubelet[2503]: I0307 02:13:21.631586 2503 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:13:21.636607 kubelet[2503]: E0307 02:13:21.636562 2503 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:13:21.636855 kubelet[2503]: I0307 02:13:21.636807 2503 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:13:21.644987 kubelet[2503]: I0307 02:13:21.644890 2503 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:13:21.645334 kubelet[2503]: I0307 02:13:21.645252 2503 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:13:21.645624 kubelet[2503]: I0307 02:13:21.645312 2503 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:13:21.645624 kubelet[2503]: I0307 02:13:21.645612 2503 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:13:21.645624 kubelet[2503]: I0307 02:13:21.645625 2503 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:13:21.645932 kubelet[2503]: I0307 02:13:21.645658 2503 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:13:21.645978 kubelet[2503]: I0307 02:13:21.645933 2503 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:13:21.646310 kubelet[2503]: I0307 02:13:21.646246 2503 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:13:21.646310 kubelet[2503]: I0307 02:13:21.646288 2503 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:13:21.646310 kubelet[2503]: I0307 02:13:21.646312 2503 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:13:21.646428 kubelet[2503]: I0307 02:13:21.646326 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:13:21.648027 kubelet[2503]: I0307 02:13:21.647840 2503 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:13:21.649051 kubelet[2503]: I0307 02:13:21.648962 2503 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:13:21.649051 kubelet[2503]: I0307 02:13:21.649022 2503 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:13:21.654077 kubelet[2503]: 
I0307 02:13:21.654023 2503 server.go:1257] "Started kubelet" Mar 7 02:13:21.660407 kubelet[2503]: I0307 02:13:21.659105 2503 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:13:21.660407 kubelet[2503]: I0307 02:13:21.659262 2503 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:13:21.660407 kubelet[2503]: I0307 02:13:21.659677 2503 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:13:21.660407 kubelet[2503]: I0307 02:13:21.659759 2503 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:13:21.660407 kubelet[2503]: I0307 02:13:21.659834 2503 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:13:21.663764 kubelet[2503]: I0307 02:13:21.663736 2503 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:13:21.666937 kubelet[2503]: I0307 02:13:21.666125 2503 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:13:21.674269 kubelet[2503]: I0307 02:13:21.674187 2503 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:13:21.675852 kubelet[2503]: I0307 02:13:21.675727 2503 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:13:21.676897 kubelet[2503]: I0307 02:13:21.676786 2503 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:13:21.677202 kubelet[2503]: I0307 02:13:21.677161 2503 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:13:21.677401 kubelet[2503]: I0307 02:13:21.677314 2503 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:13:21.682800 kubelet[2503]: I0307 02:13:21.682772 2503 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:13:21.683616 kubelet[2503]: E0307 02:13:21.683559 2503 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:13:21.688126 kubelet[2503]: I0307 02:13:21.688067 2503 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 02:13:21.690030 kubelet[2503]: I0307 02:13:21.689970 2503 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 02:13:21.690030 kubelet[2503]: I0307 02:13:21.690005 2503 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:13:21.690030 kubelet[2503]: I0307 02:13:21.690027 2503 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:13:21.690167 kubelet[2503]: E0307 02:13:21.690077 2503 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:13:21.733650 kubelet[2503]: I0307 02:13:21.733584 2503 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:13:21.733650 kubelet[2503]: I0307 02:13:21.733624 2503 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:13:21.733650 kubelet[2503]: I0307 02:13:21.733647 2503 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:13:21.733877 kubelet[2503]: I0307 02:13:21.733847 2503 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 02:13:21.733906 kubelet[2503]: I0307 02:13:21.733877 2503 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 02:13:21.733906 kubelet[2503]: I0307 02:13:21.733902 2503 policy_none.go:50] "Start" Mar 7 02:13:21.733955 kubelet[2503]: I0307 02:13:21.733912 2503 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:13:21.733955 kubelet[2503]: I0307 02:13:21.733925 2503 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:13:21.734108 kubelet[2503]: I0307 02:13:21.734083 2503 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 02:13:21.734165 kubelet[2503]: I0307 02:13:21.734112 2503 policy_none.go:44] "Start" Mar 7 02:13:21.742605 kubelet[2503]: E0307 02:13:21.742554 2503 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:13:21.743166 kubelet[2503]: I0307 02:13:21.743105 2503 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:13:21.743166 kubelet[2503]: I0307 02:13:21.743141 2503 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:13:21.744638 kubelet[2503]: I0307 02:13:21.744603 2503 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:13:21.747685 kubelet[2503]: E0307 02:13:21.747611 2503 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 02:13:21.792368 kubelet[2503]: I0307 02:13:21.792079 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:21.793637 kubelet[2503]: I0307 02:13:21.793121 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:21.796278 kubelet[2503]: I0307 02:13:21.794781 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:21.802133 kubelet[2503]: E0307 02:13:21.801938 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:21.802133 kubelet[2503]: E0307 02:13:21.802085 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:21.802270 kubelet[2503]: E0307 02:13:21.802199 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:21.857393 kubelet[2503]: I0307 02:13:21.856670 2503 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:13:21.864391 kubelet[2503]: I0307 02:13:21.864368 2503 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 7 02:13:21.864615 kubelet[2503]: I0307 02:13:21.864430 2503 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 02:13:21.878979 kubelet[2503]: I0307 02:13:21.878891 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5dcb3623ce236345f85d5cd417da200-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5dcb3623ce236345f85d5cd417da200\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:21.878979 kubelet[2503]: I0307 02:13:21.878925 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5dcb3623ce236345f85d5cd417da200-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5dcb3623ce236345f85d5cd417da200\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:21.878979 kubelet[2503]: I0307 02:13:21.878952 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5dcb3623ce236345f85d5cd417da200-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5dcb3623ce236345f85d5cd417da200\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:21.878979 kubelet[2503]: I0307 02:13:21.878967 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:21.878979 kubelet[2503]: I0307 02:13:21.878979 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:21.879141 kubelet[2503]: I0307 02:13:21.878994 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:21.879141 kubelet[2503]: I0307 02:13:21.879005 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:21.879141 kubelet[2503]: I0307 02:13:21.879017 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:21.879141 kubelet[2503]: I0307 02:13:21.879029 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:13:22.102897 kubelet[2503]: E0307 02:13:22.102755 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:22.102897 kubelet[2503]: E0307 02:13:22.102755 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:22.103080 kubelet[2503]: E0307 02:13:22.103013 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:22.647437 kubelet[2503]: I0307 02:13:22.647388 2503 apiserver.go:52] "Watching apiserver" Mar 7 02:13:22.676306 kubelet[2503]: I0307 02:13:22.676252 2503 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:13:22.712124 kubelet[2503]: I0307 02:13:22.712057 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:22.713254 kubelet[2503]: E0307 02:13:22.713222 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:22.713357 kubelet[2503]: I0307 02:13:22.713294 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:22.724358 kubelet[2503]: E0307 02:13:22.724236 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 02:13:22.724661 kubelet[2503]: E0307 02:13:22.724612 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:22.726738 kubelet[2503]: E0307 02:13:22.726708 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 02:13:22.726988 kubelet[2503]: E0307 02:13:22.726961 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:23.228917 kubelet[2503]: I0307 02:13:23.228846 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.228831953 podStartE2EDuration="3.228831953s" podCreationTimestamp="2026-03-07 02:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:13:23.221566675 +0000 UTC m=+1.681859663" watchObservedRunningTime="2026-03-07 02:13:23.228831953 +0000 UTC m=+1.689124950" Mar 7 02:13:23.236396 kubelet[2503]: I0307 02:13:23.236355 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.236350374 podStartE2EDuration="3.236350374s" podCreationTimestamp="2026-03-07 02:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:13:23.229080056 +0000 UTC m=+1.689373053" watchObservedRunningTime="2026-03-07 02:13:23.236350374 +0000 UTC m=+1.696643361" Mar 7 02:13:23.715173 kubelet[2503]: E0307 02:13:23.715074 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:23.715763 kubelet[2503]: E0307 02:13:23.715363 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:23.715763 kubelet[2503]: E0307 02:13:23.715654 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:23.731150 kubelet[2503]: I0307 02:13:23.731010 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.7309931240000003 podStartE2EDuration="3.730993124s" podCreationTimestamp="2026-03-07 02:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:13:23.236567845 +0000 UTC m=+1.696860842" watchObservedRunningTime="2026-03-07 02:13:23.730993124 +0000 UTC m=+2.191286121" Mar 7 02:13:24.719723 kubelet[2503]: E0307 02:13:24.717256 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:25.092656 kubelet[2503]: E0307 02:13:25.092316 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:26.384252 kubelet[2503]: E0307 02:13:26.384214 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:27.448540 kubelet[2503]: I0307 02:13:27.448440 2503 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 02:13:27.448903 containerd[1454]: time="2026-03-07T02:13:27.448818699Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 02:13:27.449142 kubelet[2503]: I0307 02:13:27.448989 2503 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 02:13:28.162432 systemd[1]: Created slice kubepods-besteffort-pod4714af65_0564_461b_b54e_3075c6c2c3f0.slice - libcontainer container kubepods-besteffort-pod4714af65_0564_461b_b54e_3075c6c2c3f0.slice. Mar 7 02:13:28.220051 kubelet[2503]: I0307 02:13:28.219962 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4714af65-0564-461b-b54e-3075c6c2c3f0-kube-proxy\") pod \"kube-proxy-z68js\" (UID: \"4714af65-0564-461b-b54e-3075c6c2c3f0\") " pod="kube-system/kube-proxy-z68js" Mar 7 02:13:28.220051 kubelet[2503]: I0307 02:13:28.220012 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4714af65-0564-461b-b54e-3075c6c2c3f0-xtables-lock\") pod \"kube-proxy-z68js\" (UID: \"4714af65-0564-461b-b54e-3075c6c2c3f0\") " pod="kube-system/kube-proxy-z68js" Mar 7 02:13:28.220213 kubelet[2503]: I0307 02:13:28.220075 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4714af65-0564-461b-b54e-3075c6c2c3f0-lib-modules\") pod \"kube-proxy-z68js\" (UID: \"4714af65-0564-461b-b54e-3075c6c2c3f0\") " pod="kube-system/kube-proxy-z68js" Mar 7 02:13:28.220213 kubelet[2503]: I0307 02:13:28.220114 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrjr\" (UniqueName: \"kubernetes.io/projected/4714af65-0564-461b-b54e-3075c6c2c3f0-kube-api-access-fwrjr\") pod \"kube-proxy-z68js\" (UID: \"4714af65-0564-461b-b54e-3075c6c2c3f0\") " pod="kube-system/kube-proxy-z68js" Mar 7 02:13:28.324648 kubelet[2503]: E0307 02:13:28.324593 2503 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 7 02:13:28.324648 kubelet[2503]: E0307 02:13:28.324632 2503 projected.go:196] Error preparing data for projected volume kube-api-access-fwrjr for pod kube-system/kube-proxy-z68js: configmap "kube-root-ca.crt" not found Mar 7 02:13:28.324782 kubelet[2503]: E0307 02:13:28.324694 2503 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4714af65-0564-461b-b54e-3075c6c2c3f0-kube-api-access-fwrjr podName:4714af65-0564-461b-b54e-3075c6c2c3f0 nodeName:}" failed. No retries permitted until 2026-03-07 02:13:28.824676366 +0000 UTC m=+7.284969354 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fwrjr" (UniqueName: "kubernetes.io/projected/4714af65-0564-461b-b54e-3075c6c2c3f0-kube-api-access-fwrjr") pod "kube-proxy-z68js" (UID: "4714af65-0564-461b-b54e-3075c6c2c3f0") : configmap "kube-root-ca.crt" not found Mar 7 02:13:28.762189 systemd[1]: Created slice kubepods-besteffort-podd2d021aa_c787_4a73_abc8_3010c0470601.slice - libcontainer container kubepods-besteffort-podd2d021aa_c787_4a73_abc8_3010c0470601.slice. Mar 7 02:13:28.825408 kubelet[2503]: I0307 02:13:28.825339 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d2d021aa-c787-4a73-abc8-3010c0470601-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-v4gcv\" (UID: \"d2d021aa-c787-4a73-abc8-3010c0470601\") " pod="tigera-operator/tigera-operator-6cf4cccc57-v4gcv" Mar 7 02:13:28.825408 kubelet[2503]: I0307 02:13:28.825384 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw555\" (UniqueName: \"kubernetes.io/projected/d2d021aa-c787-4a73-abc8-3010c0470601-kube-api-access-qw555\") pod \"tigera-operator-6cf4cccc57-v4gcv\" (UID: \"d2d021aa-c787-4a73-abc8-3010c0470601\") " pod="tigera-operator/tigera-operator-6cf4cccc57-v4gcv" Mar 7 02:13:29.068849 containerd[1454]: time="2026-03-07T02:13:29.068720220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-v4gcv,Uid:d2d021aa-c787-4a73-abc8-3010c0470601,Namespace:tigera-operator,Attempt:0,}" Mar 7 02:13:29.076592 kubelet[2503]: E0307 02:13:29.076559 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:29.077252 containerd[1454]: time="2026-03-07T02:13:29.077202537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z68js,Uid:4714af65-0564-461b-b54e-3075c6c2c3f0,Namespace:kube-system,Attempt:0,}" Mar 7 02:13:29.097245 containerd[1454]: time="2026-03-07T02:13:29.097075498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:29.097245 containerd[1454]: time="2026-03-07T02:13:29.097139608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:29.097245 containerd[1454]: time="2026-03-07T02:13:29.097161138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:29.097418 containerd[1454]: time="2026-03-07T02:13:29.097319764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:29.105268 containerd[1454]: time="2026-03-07T02:13:29.105047872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:29.105268 containerd[1454]: time="2026-03-07T02:13:29.105104538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:29.105268 containerd[1454]: time="2026-03-07T02:13:29.105115559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:29.105268 containerd[1454]: time="2026-03-07T02:13:29.105207981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:29.119671 systemd[1]: Started cri-containerd-1746db11a72c6b5b74bcff8f2681e76314bd99590dfbbd318cde9457da9c2819.scope - libcontainer container 1746db11a72c6b5b74bcff8f2681e76314bd99590dfbbd318cde9457da9c2819. Mar 7 02:13:29.122875 systemd[1]: Started cri-containerd-26c7c72eb0331a4ab3d655ea0bd746b69eb04f8b4729042671268fea9ecd0e59.scope - libcontainer container 26c7c72eb0331a4ab3d655ea0bd746b69eb04f8b4729042671268fea9ecd0e59. Mar 7 02:13:29.146649 containerd[1454]: time="2026-03-07T02:13:29.146580398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z68js,Uid:4714af65-0564-461b-b54e-3075c6c2c3f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c7c72eb0331a4ab3d655ea0bd746b69eb04f8b4729042671268fea9ecd0e59\"" Mar 7 02:13:29.147403 kubelet[2503]: E0307 02:13:29.147384 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:29.158262 containerd[1454]: time="2026-03-07T02:13:29.157184834Z" level=info msg="CreateContainer within sandbox \"26c7c72eb0331a4ab3d655ea0bd746b69eb04f8b4729042671268fea9ecd0e59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 02:13:29.169030 containerd[1454]: time="2026-03-07T02:13:29.168930617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-v4gcv,Uid:d2d021aa-c787-4a73-abc8-3010c0470601,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1746db11a72c6b5b74bcff8f2681e76314bd99590dfbbd318cde9457da9c2819\"" Mar 7 02:13:29.171014 containerd[1454]: time="2026-03-07T02:13:29.170953940Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 02:13:29.180378 containerd[1454]: time="2026-03-07T02:13:29.180309387Z" level=info msg="CreateContainer within sandbox \"26c7c72eb0331a4ab3d655ea0bd746b69eb04f8b4729042671268fea9ecd0e59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11435fd0ebe78e52e6ea706433eff3c6f00a899b5f24bfcfe82bc83400511616\"" Mar 7 02:13:29.181751 containerd[1454]: time="2026-03-07T02:13:29.181708089Z" level=info msg="StartContainer for \"11435fd0ebe78e52e6ea706433eff3c6f00a899b5f24bfcfe82bc83400511616\"" Mar 7 02:13:29.213665 systemd[1]: Started cri-containerd-11435fd0ebe78e52e6ea706433eff3c6f00a899b5f24bfcfe82bc83400511616.scope - libcontainer container 11435fd0ebe78e52e6ea706433eff3c6f00a899b5f24bfcfe82bc83400511616. Mar 7 02:13:29.247336 containerd[1454]: time="2026-03-07T02:13:29.247276484Z" level=info msg="StartContainer for \"11435fd0ebe78e52e6ea706433eff3c6f00a899b5f24bfcfe82bc83400511616\" returns successfully" Mar 7 02:13:29.725120 kubelet[2503]: E0307 02:13:29.725052 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:29.877616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250032509.mount: Deactivated successfully. 
Mar 7 02:13:30.565493 containerd[1454]: time="2026-03-07T02:13:30.565416054Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:30.566569 containerd[1454]: time="2026-03-07T02:13:30.566525083Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 02:13:30.567897 containerd[1454]: time="2026-03-07T02:13:30.567841579Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:30.570334 containerd[1454]: time="2026-03-07T02:13:30.570280003Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:30.570987 containerd[1454]: time="2026-03-07T02:13:30.570937376Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.399935576s" Mar 7 02:13:30.570987 containerd[1454]: time="2026-03-07T02:13:30.570976509Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 02:13:30.574939 containerd[1454]: time="2026-03-07T02:13:30.574888367Z" level=info msg="CreateContainer within sandbox \"1746db11a72c6b5b74bcff8f2681e76314bd99590dfbbd318cde9457da9c2819\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 02:13:30.588176 containerd[1454]: time="2026-03-07T02:13:30.588100057Z" level=info msg="CreateContainer within sandbox \"1746db11a72c6b5b74bcff8f2681e76314bd99590dfbbd318cde9457da9c2819\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5cbae8f22d4751e994af32c3377cfe376fd9ecba5c0458399ce100531e75e45b\"" Mar 7 02:13:30.588714 containerd[1454]: time="2026-03-07T02:13:30.588678757Z" level=info msg="StartContainer for \"5cbae8f22d4751e994af32c3377cfe376fd9ecba5c0458399ce100531e75e45b\"" Mar 7 02:13:30.621678 systemd[1]: Started cri-containerd-5cbae8f22d4751e994af32c3377cfe376fd9ecba5c0458399ce100531e75e45b.scope - libcontainer container 5cbae8f22d4751e994af32c3377cfe376fd9ecba5c0458399ce100531e75e45b. 
Mar 7 02:13:30.647106 containerd[1454]: time="2026-03-07T02:13:30.647040836Z" level=info msg="StartContainer for \"5cbae8f22d4751e994af32c3377cfe376fd9ecba5c0458399ce100531e75e45b\" returns successfully" Mar 7 02:13:30.740036 kubelet[2503]: I0307 02:13:30.739908 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-z68js" podStartSLOduration=2.739896372 podStartE2EDuration="2.739896372s" podCreationTimestamp="2026-03-07 02:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:13:29.736844999 +0000 UTC m=+8.197138006" watchObservedRunningTime="2026-03-07 02:13:30.739896372 +0000 UTC m=+9.200189359" Mar 7 02:13:30.740036 kubelet[2503]: I0307 02:13:30.740000 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-v4gcv" podStartSLOduration=1.338991431 podStartE2EDuration="2.739996839s" podCreationTimestamp="2026-03-07 02:13:28 +0000 UTC" firstStartedPulling="2026-03-07 02:13:29.170571731 +0000 UTC m=+7.630864728" lastFinishedPulling="2026-03-07 02:13:30.571577149 +0000 UTC m=+9.031870136" observedRunningTime="2026-03-07 02:13:30.739772611 +0000 UTC m=+9.200065598" watchObservedRunningTime="2026-03-07 02:13:30.739996839 +0000 UTC m=+9.200289837" Mar 7 02:13:33.211049 kubelet[2503]: E0307 02:13:33.210988 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:35.097660 kubelet[2503]: E0307 02:13:35.097624 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:35.752273 sudo[1631]: pam_unix(sudo:session): session closed for user root Mar 7 02:13:35.756838 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:35.761112 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Mar 7 02:13:35.762146 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:58288.service: Deactivated successfully. Mar 7 02:13:35.767914 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 02:13:35.771163 systemd[1]: session-7.scope: Consumed 4.195s CPU time, 156.8M memory peak, 0B memory swap peak. Mar 7 02:13:35.776688 systemd-logind[1442]: Removed session 7. Mar 7 02:13:36.389850 kubelet[2503]: E0307 02:13:36.389663 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:37.371997 systemd[1]: Created slice kubepods-besteffort-pod56252858_8b96_4537_850e_5e31255ab75e.slice - libcontainer container kubepods-besteffort-pod56252858_8b96_4537_850e_5e31255ab75e.slice. 
Mar 7 02:13:37.380864 kubelet[2503]: I0307 02:13:37.380777 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56252858-8b96-4537-850e-5e31255ab75e-tigera-ca-bundle\") pod \"calico-typha-8b65b9975-r98h6\" (UID: \"56252858-8b96-4537-850e-5e31255ab75e\") " pod="calico-system/calico-typha-8b65b9975-r98h6" Mar 7 02:13:37.380864 kubelet[2503]: I0307 02:13:37.380830 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdqm\" (UniqueName: \"kubernetes.io/projected/56252858-8b96-4537-850e-5e31255ab75e-kube-api-access-zrdqm\") pod \"calico-typha-8b65b9975-r98h6\" (UID: \"56252858-8b96-4537-850e-5e31255ab75e\") " pod="calico-system/calico-typha-8b65b9975-r98h6" Mar 7 02:13:37.381016 kubelet[2503]: I0307 02:13:37.380914 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56252858-8b96-4537-850e-5e31255ab75e-typha-certs\") pod \"calico-typha-8b65b9975-r98h6\" (UID: \"56252858-8b96-4537-850e-5e31255ab75e\") " pod="calico-system/calico-typha-8b65b9975-r98h6" Mar 7 02:13:37.414207 systemd[1]: Created slice kubepods-besteffort-podb7f0a4d9_ecb4_4655_9b78_23246bd4b460.slice - libcontainer container kubepods-besteffort-podb7f0a4d9_ecb4_4655_9b78_23246bd4b460.slice. Mar 7 02:13:37.481967 kubelet[2503]: I0307 02:13:37.481830 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-cni-bin-dir\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.481967 kubelet[2503]: I0307 02:13:37.481873 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-cni-net-dir\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.481967 kubelet[2503]: I0307 02:13:37.481888 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-cni-log-dir\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.481967 kubelet[2503]: I0307 02:13:37.481901 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-var-lib-calico\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.481967 kubelet[2503]: I0307 02:13:37.481918 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-sys-fs\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482604 kubelet[2503]: I0307 02:13:37.481931 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-var-run-calico\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482604 kubelet[2503]: I0307 02:13:37.481943 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-bpffs\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482604 kubelet[2503]: I0307 02:13:37.481954 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-nodeproc\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482604 kubelet[2503]: I0307 02:13:37.481967 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-lib-modules\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482604 kubelet[2503]: I0307 02:13:37.481980 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-flexvol-driver-host\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482731 kubelet[2503]: I0307 02:13:37.482040 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-xtables-lock\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482731 kubelet[2503]: I0307 02:13:37.482073 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-node-certs\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482731 kubelet[2503]: I0307 02:13:37.482089 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qpv\" (UniqueName: \"kubernetes.io/projected/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-kube-api-access-f6qpv\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482731 kubelet[2503]: I0307 02:13:37.482125 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-policysync\") pod \"calico-node-s4qqx\" (UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.482731 kubelet[2503]: I0307 02:13:37.482639 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f0a4d9-ecb4-4655-9b78-23246bd4b460-tigera-ca-bundle\") pod \"calico-node-s4qqx\" 
(UID: \"b7f0a4d9-ecb4-4655-9b78-23246bd4b460\") " pod="calico-system/calico-node-s4qqx" Mar 7 02:13:37.521559 kubelet[2503]: E0307 02:13:37.521479 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:37.585468 kubelet[2503]: I0307 02:13:37.583857 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4a09e6a6-ad07-4660-ad76-3cd9ebaad755-registration-dir\") pod \"csi-node-driver-v2tvt\" (UID: \"4a09e6a6-ad07-4660-ad76-3cd9ebaad755\") " pod="calico-system/csi-node-driver-v2tvt" Mar 7 02:13:37.585468 kubelet[2503]: I0307 02:13:37.583893 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4a09e6a6-ad07-4660-ad76-3cd9ebaad755-socket-dir\") pod \"csi-node-driver-v2tvt\" (UID: \"4a09e6a6-ad07-4660-ad76-3cd9ebaad755\") " pod="calico-system/csi-node-driver-v2tvt" Mar 7 02:13:37.585468 kubelet[2503]: I0307 02:13:37.583909 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4a09e6a6-ad07-4660-ad76-3cd9ebaad755-varrun\") pod \"csi-node-driver-v2tvt\" (UID: \"4a09e6a6-ad07-4660-ad76-3cd9ebaad755\") " pod="calico-system/csi-node-driver-v2tvt" Mar 7 02:13:37.585468 kubelet[2503]: I0307 02:13:37.583980 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a09e6a6-ad07-4660-ad76-3cd9ebaad755-kubelet-dir\") pod \"csi-node-driver-v2tvt\" (UID: \"4a09e6a6-ad07-4660-ad76-3cd9ebaad755\") " pod="calico-system/csi-node-driver-v2tvt" Mar 7 02:13:37.585468 kubelet[2503]: I0307 02:13:37.584045 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjwbq\" (UniqueName: \"kubernetes.io/projected/4a09e6a6-ad07-4660-ad76-3cd9ebaad755-kube-api-access-tjwbq\") pod \"csi-node-driver-v2tvt\" (UID: \"4a09e6a6-ad07-4660-ad76-3cd9ebaad755\") " pod="calico-system/csi-node-driver-v2tvt" Mar 7 02:13:37.603757 kubelet[2503]: E0307 02:13:37.603689 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.603903 kubelet[2503]: W0307 02:13:37.603886 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.604149 kubelet[2503]: E0307 02:13:37.604134 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:13:37.605164 kubelet[2503]: E0307 02:13:37.605152 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.605265 kubelet[2503]: W0307 02:13:37.605252 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.605327 kubelet[2503]: E0307 02:13:37.605315 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.685160 kubelet[2503]: E0307 02:13:37.685128 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.685160 kubelet[2503]: W0307 02:13:37.685154 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.685293 kubelet[2503]: E0307 02:13:37.685172 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.685631 kubelet[2503]: E0307 02:13:37.685605 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.685631 kubelet[2503]: W0307 02:13:37.685627 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.685733 kubelet[2503]: E0307 02:13:37.685640 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.685992 kubelet[2503]: E0307 02:13:37.685950 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.685992 kubelet[2503]: W0307 02:13:37.685972 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.685992 kubelet[2503]: E0307 02:13:37.685982 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.686324 kubelet[2503]: E0307 02:13:37.686290 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.686324 kubelet[2503]: W0307 02:13:37.686312 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.686324 kubelet[2503]: E0307 02:13:37.686322 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:13:37.686660 kubelet[2503]: E0307 02:13:37.686640 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.686660 kubelet[2503]: W0307 02:13:37.686658 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.686729 kubelet[2503]: E0307 02:13:37.686666 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.686946 kubelet[2503]: E0307 02:13:37.686927 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.686946 kubelet[2503]: W0307 02:13:37.686944 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.687002 kubelet[2503]: E0307 02:13:37.686952 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.687230 kubelet[2503]: E0307 02:13:37.687210 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.687230 kubelet[2503]: W0307 02:13:37.687227 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.687283 kubelet[2503]: E0307 02:13:37.687237 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.687583 kubelet[2503]: E0307 02:13:37.687562 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.687583 kubelet[2503]: W0307 02:13:37.687580 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.687645 kubelet[2503]: E0307 02:13:37.687588 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.687862 kubelet[2503]: E0307 02:13:37.687840 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.687862 kubelet[2503]: W0307 02:13:37.687856 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.687907 kubelet[2503]: E0307 02:13:37.687865 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:13:37.688173 kubelet[2503]: E0307 02:13:37.688133 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.688173 kubelet[2503]: W0307 02:13:37.688153 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.688173 kubelet[2503]: E0307 02:13:37.688161 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.688473 kubelet[2503]: E0307 02:13:37.688401 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.688473 kubelet[2503]: W0307 02:13:37.688421 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.688473 kubelet[2503]: E0307 02:13:37.688451 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.688757 kubelet[2503]: E0307 02:13:37.688735 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.688757 kubelet[2503]: W0307 02:13:37.688752 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.688806 kubelet[2503]: E0307 02:13:37.688760 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.689022 kubelet[2503]: E0307 02:13:37.689001 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.689022 kubelet[2503]: W0307 02:13:37.689017 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.689065 kubelet[2503]: E0307 02:13:37.689025 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.689288 kubelet[2503]: E0307 02:13:37.689265 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.689288 kubelet[2503]: W0307 02:13:37.689282 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.689336 kubelet[2503]: E0307 02:13:37.689290 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:13:37.689599 kubelet[2503]: E0307 02:13:37.689579 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.689599 kubelet[2503]: W0307 02:13:37.689596 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.689648 kubelet[2503]: E0307 02:13:37.689604 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.689908 kubelet[2503]: E0307 02:13:37.689877 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.689908 kubelet[2503]: W0307 02:13:37.689895 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.689908 kubelet[2503]: E0307 02:13:37.689903 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.690183 kubelet[2503]: E0307 02:13:37.690145 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.690183 kubelet[2503]: W0307 02:13:37.690165 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.690183 kubelet[2503]: E0307 02:13:37.690173 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.690555 kubelet[2503]: E0307 02:13:37.690489 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.690555 kubelet[2503]: W0307 02:13:37.690545 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.690555 kubelet[2503]: E0307 02:13:37.690554 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.690870 kubelet[2503]: E0307 02:13:37.690850 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.690870 kubelet[2503]: W0307 02:13:37.690867 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.690924 kubelet[2503]: E0307 02:13:37.690875 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:13:37.691147 kubelet[2503]: E0307 02:13:37.691125 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.691147 kubelet[2503]: W0307 02:13:37.691145 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.691239 kubelet[2503]: E0307 02:13:37.691157 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.691559 kubelet[2503]: E0307 02:13:37.691468 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.691559 kubelet[2503]: W0307 02:13:37.691481 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.691559 kubelet[2503]: E0307 02:13:37.691490 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.691930 kubelet[2503]: E0307 02:13:37.691900 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.691968 kubelet[2503]: W0307 02:13:37.691931 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.691968 kubelet[2503]: E0307 02:13:37.691948 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.692418 kubelet[2503]: E0307 02:13:37.692315 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.692418 kubelet[2503]: W0307 02:13:37.692339 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.692418 kubelet[2503]: E0307 02:13:37.692353 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.692744 kubelet[2503]: E0307 02:13:37.692658 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.692744 kubelet[2503]: W0307 02:13:37.692667 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.692744 kubelet[2503]: E0307 02:13:37.692676 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:13:37.702186 kubelet[2503]: E0307 02:13:37.702153 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.702186 kubelet[2503]: W0307 02:13:37.702178 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.702341 kubelet[2503]: E0307 02:13:37.702189 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.747407 kubelet[2503]: E0307 02:13:37.747344 2503 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:13:37.747407 kubelet[2503]: W0307 02:13:37.747367 2503 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:13:37.747407 kubelet[2503]: E0307 02:13:37.747379 2503 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:13:37.748288 kubelet[2503]: E0307 02:13:37.748261 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:37.748979 containerd[1454]: time="2026-03-07T02:13:37.748883202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b65b9975-r98h6,Uid:56252858-8b96-4537-850e-5e31255ab75e,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:37.751721 containerd[1454]: time="2026-03-07T02:13:37.751424780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s4qqx,Uid:b7f0a4d9-ecb4-4655-9b78-23246bd4b460,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:37.778971 containerd[1454]: time="2026-03-07T02:13:37.778883556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:37.778971 containerd[1454]: time="2026-03-07T02:13:37.778932047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:37.778971 containerd[1454]: time="2026-03-07T02:13:37.778941945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:37.779106 containerd[1454]: time="2026-03-07T02:13:37.779008570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:37.782096 containerd[1454]: time="2026-03-07T02:13:37.781945050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:37.782096 containerd[1454]: time="2026-03-07T02:13:37.782012715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:37.782096 containerd[1454]: time="2026-03-07T02:13:37.782026722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:37.782201 containerd[1454]: time="2026-03-07T02:13:37.782097223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:37.802698 systemd[1]: Started cri-containerd-f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961.scope - libcontainer container f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961. Mar 7 02:13:37.807143 systemd[1]: Started cri-containerd-7fa23fdcba273ddd981dcdd09dae29ab44f0a46c83f0578ccb859b61db6a1950.scope - libcontainer container 7fa23fdcba273ddd981dcdd09dae29ab44f0a46c83f0578ccb859b61db6a1950. Mar 7 02:13:37.832742 containerd[1454]: time="2026-03-07T02:13:37.832688004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s4qqx,Uid:b7f0a4d9-ecb4-4655-9b78-23246bd4b460,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\"" Mar 7 02:13:37.835961 containerd[1454]: time="2026-03-07T02:13:37.834986132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 02:13:37.850768 containerd[1454]: time="2026-03-07T02:13:37.850550964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b65b9975-r98h6,Uid:56252858-8b96-4537-850e-5e31255ab75e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fa23fdcba273ddd981dcdd09dae29ab44f0a46c83f0578ccb859b61db6a1950\"" Mar 7 02:13:37.851576 kubelet[2503]: E0307 02:13:37.851398 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:38.906964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419298728.mount: Deactivated successfully. 
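[Editor's note] The long run of "driver-call.go ... unexpected end of JSON input" warnings above comes from kubelet's FlexVolume prober: for each vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec it execs the driver binary with `init` and expects a JSON status object on stdout. On this node the nodeagent~uds binary does not resolve, so the output is empty and unmarshalling fails. Below is a minimal sketch of that call pattern, not kubelet's actual code.

```go
// Sketch of the FlexVolume "init" handshake kubelet performs while probing
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>.
// An empty stdout (binary missing) makes json.Unmarshal fail with
// "unexpected end of JSON input", which is exactly the error logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeFlexDriver(path string) error {
	out, execErr := exec.Command(path, "init").CombinedOutput() // may fail if the binary is absent
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return fmt.Errorf("driver call failed: exec err=%v, unmarshal err=%w, output=%q", execErr, jerr, out)
	}
	if st.Status != "Success" {
		return fmt.Errorf("driver init returned %q: %s", st.Status, st.Message)
	}
	return nil
}

func main() {
	if err := probeFlexDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

Presumably the warnings taper off once calico-node's flexvol-driver init container (created a few entries below) has copied the uds binary into the flexvol-driver-host path mounted earlier.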
Mar 7 02:13:38.973645 containerd[1454]: time="2026-03-07T02:13:38.973602552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:38.975060 containerd[1454]: time="2026-03-07T02:13:38.974658463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 7 02:13:38.975983 containerd[1454]: time="2026-03-07T02:13:38.975951095Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:38.978244 containerd[1454]: time="2026-03-07T02:13:38.978202156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:38.978848 containerd[1454]: time="2026-03-07T02:13:38.978816918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.143803865s" Mar 7 02:13:38.978923 containerd[1454]: time="2026-03-07T02:13:38.978850620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 02:13:38.979606 containerd[1454]: time="2026-03-07T02:13:38.979587263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 02:13:38.983415 containerd[1454]: time="2026-03-07T02:13:38.983383949Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 02:13:39.001722 containerd[1454]: time="2026-03-07T02:13:39.001636739Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08\"" Mar 7 02:13:39.002163 containerd[1454]: time="2026-03-07T02:13:39.002137576Z" level=info msg="StartContainer for \"7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08\"" Mar 7 02:13:39.031682 systemd[1]: Started cri-containerd-7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08.scope - libcontainer container 7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08. Mar 7 02:13:39.071150 systemd[1]: cri-containerd-7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08.scope: Deactivated successfully. 
Mar 7 02:13:39.074837 containerd[1454]: time="2026-03-07T02:13:39.074719131Z" level=info msg="StartContainer for \"7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08\" returns successfully" Mar 7 02:13:39.109383 containerd[1454]: time="2026-03-07T02:13:39.107348893Z" level=info msg="shim disconnected" id=7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08 namespace=k8s.io Mar 7 02:13:39.109383 containerd[1454]: time="2026-03-07T02:13:39.109339278Z" level=warning msg="cleaning up after shim disconnected" id=7708555cad0e4f295be978aef8dce0c10ac936b9f239dba51ce789f3cc944f08 namespace=k8s.io Mar 7 02:13:39.109383 containerd[1454]: time="2026-03-07T02:13:39.109351521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:13:39.690613 kubelet[2503]: E0307 02:13:39.690563 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:41.691413 kubelet[2503]: E0307 02:13:41.691280 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:41.698881 containerd[1454]: time="2026-03-07T02:13:41.698791019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:41.699849 containerd[1454]: time="2026-03-07T02:13:41.699790024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 7 02:13:41.701103 containerd[1454]: time="2026-03-07T02:13:41.701048302Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:41.703577 containerd[1454]: time="2026-03-07T02:13:41.703539564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:41.704232 containerd[1454]: time="2026-03-07T02:13:41.704174606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.72444615s" Mar 7 02:13:41.704232 containerd[1454]: time="2026-03-07T02:13:41.704217427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 02:13:41.706882 containerd[1454]: time="2026-03-07T02:13:41.706664064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 02:13:41.717739 containerd[1454]: time="2026-03-07T02:13:41.717654585Z" level=info msg="CreateContainer within sandbox \"7fa23fdcba273ddd981dcdd09dae29ab44f0a46c83f0578ccb859b61db6a1950\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
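[Editor's note] The "shim disconnected ... cleaning up after shim disconnected ... cleaning up dead shim" trio above is the normal teardown for a container that ran to completion; for calico-node these are its init containers (flexvol-driver here, ebpf-bootstrap and install-cni further down). A hedged client-go sketch for confirming the same thing from the API side; the namespace and pod name are taken from the log, and the kubeconfig handling is the standard out-of-cluster pattern rather than anything specific to this node.

```go
// List calico-node's init container states; each completed init container
// corresponds to one of the shim-teardown sequences in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("calico-system").Get(context.TODO(), "calico-node-s4qqx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s: exited %d (%s)\n", st.Name, t.ExitCode, t.Reason)
		} else {
			fmt.Printf("%s: still running or waiting\n", st.Name)
		}
	}
}
```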
Mar 7 02:13:41.736140 containerd[1454]: time="2026-03-07T02:13:41.736077618Z" level=info msg="CreateContainer within sandbox \"7fa23fdcba273ddd981dcdd09dae29ab44f0a46c83f0578ccb859b61db6a1950\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b369528dd4938aafa62e46465fc42d7f801d0ee16175930d1287f867680f0785\"" Mar 7 02:13:41.737169 containerd[1454]: time="2026-03-07T02:13:41.736674683Z" level=info msg="StartContainer for \"b369528dd4938aafa62e46465fc42d7f801d0ee16175930d1287f867680f0785\"" Mar 7 02:13:41.771690 systemd[1]: Started cri-containerd-b369528dd4938aafa62e46465fc42d7f801d0ee16175930d1287f867680f0785.scope - libcontainer container b369528dd4938aafa62e46465fc42d7f801d0ee16175930d1287f867680f0785. Mar 7 02:13:41.819828 containerd[1454]: time="2026-03-07T02:13:41.819752353Z" level=info msg="StartContainer for \"b369528dd4938aafa62e46465fc42d7f801d0ee16175930d1287f867680f0785\" returns successfully" Mar 7 02:13:42.125154 update_engine[1448]: I20260307 02:13:42.124974 1448 update_attempter.cc:509] Updating boot flags... Mar 7 02:13:42.157576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3166) Mar 7 02:13:42.208255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3170) Mar 7 02:13:42.232784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3170) Mar 7 02:13:42.762741 kubelet[2503]: E0307 02:13:42.761983 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:42.772215 kubelet[2503]: I0307 02:13:42.771939 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-8b65b9975-r98h6" podStartSLOduration=1.919588406 podStartE2EDuration="5.771928582s" podCreationTimestamp="2026-03-07 02:13:37 +0000 UTC" firstStartedPulling="2026-03-07 02:13:37.852689236 +0000 UTC m=+16.312982233" lastFinishedPulling="2026-03-07 02:13:41.705029332 +0000 UTC m=+20.165322409" observedRunningTime="2026-03-07 02:13:42.771788877 +0000 UTC m=+21.232081864" watchObservedRunningTime="2026-03-07 02:13:42.771928582 +0000 UTC m=+21.232221569" Mar 7 02:13:43.693281 kubelet[2503]: E0307 02:13:43.693193 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:43.765196 kubelet[2503]: I0307 02:13:43.765146 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:13:43.765673 kubelet[2503]: E0307 02:13:43.765610 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:45.565549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814916023.mount: Deactivated successfully. 
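[Editor's note] The recurring "Nameserver limits exceeded" warnings come from kubelet's DNS handling: the glibc resolver honours at most three resolv.conf nameservers (MAXNS), so when the node lists more, kubelet clamps the set it propagates to pods and logs the servers it kept, here "1.1.1.1 1.0.0.1 8.8.8.8". The sketch below is an illustrative reimplementation of that check, not kubelet's dns.go.

```go
// Count resolv.conf nameservers and apply the 3-server resolver limit,
// mirroring the condition behind the warning in the log above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println("cannot read resolv.conf:", err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Println("nameservers:", servers)
	}
}
```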
Mar 7 02:13:45.618926 kubelet[2503]: I0307 02:13:45.618855 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:13:45.619590 kubelet[2503]: E0307 02:13:45.619493 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:45.691154 kubelet[2503]: E0307 02:13:45.690849 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:45.748096 containerd[1454]: time="2026-03-07T02:13:45.748004568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:45.748976 containerd[1454]: time="2026-03-07T02:13:45.748890519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 02:13:45.750295 containerd[1454]: time="2026-03-07T02:13:45.750241552Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:45.752439 containerd[1454]: time="2026-03-07T02:13:45.752388660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:45.753055 containerd[1454]: time="2026-03-07T02:13:45.752999234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.046252968s" Mar 7 02:13:45.753055 containerd[1454]: time="2026-03-07T02:13:45.753047083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 02:13:45.764723 containerd[1454]: time="2026-03-07T02:13:45.764689273Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 02:13:45.768208 kubelet[2503]: E0307 02:13:45.768147 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:45.826050 containerd[1454]: time="2026-03-07T02:13:45.825808747Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828\"" Mar 7 02:13:45.826973 containerd[1454]: time="2026-03-07T02:13:45.826782875Z" level=info msg="StartContainer for \"f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828\"" Mar 7 02:13:45.877682 systemd[1]: Started cri-containerd-f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828.scope - libcontainer container 
f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828. Mar 7 02:13:45.907722 containerd[1454]: time="2026-03-07T02:13:45.907648307Z" level=info msg="StartContainer for \"f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828\" returns successfully" Mar 7 02:13:45.960558 systemd[1]: cri-containerd-f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828.scope: Deactivated successfully. Mar 7 02:13:46.139727 containerd[1454]: time="2026-03-07T02:13:46.139660342Z" level=info msg="shim disconnected" id=f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828 namespace=k8s.io Mar 7 02:13:46.139727 containerd[1454]: time="2026-03-07T02:13:46.139716548Z" level=warning msg="cleaning up after shim disconnected" id=f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828 namespace=k8s.io Mar 7 02:13:46.139727 containerd[1454]: time="2026-03-07T02:13:46.139725905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:13:46.566036 systemd[1]: run-containerd-runc-k8s.io-f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828-runc.GZklWQ.mount: Deactivated successfully. Mar 7 02:13:46.566152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0b3c12a04149f093c579c896293371578f0305746d858ada01c738707c55828-rootfs.mount: Deactivated successfully. Mar 7 02:13:46.773058 containerd[1454]: time="2026-03-07T02:13:46.772894666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 02:13:47.691462 kubelet[2503]: E0307 02:13:47.691380 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:49.280363 containerd[1454]: time="2026-03-07T02:13:49.280293539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:49.281032 containerd[1454]: time="2026-03-07T02:13:49.281009138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 02:13:49.282282 containerd[1454]: time="2026-03-07T02:13:49.282230125Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:49.284774 containerd[1454]: time="2026-03-07T02:13:49.284732465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:49.285850 containerd[1454]: time="2026-03-07T02:13:49.285778461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.512849882s" Mar 7 02:13:49.285915 containerd[1454]: time="2026-03-07T02:13:49.285883056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 02:13:49.290496 containerd[1454]: time="2026-03-07T02:13:49.290454249Z" 
level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 02:13:49.311389 containerd[1454]: time="2026-03-07T02:13:49.311319933Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f\"" Mar 7 02:13:49.311930 containerd[1454]: time="2026-03-07T02:13:49.311867598Z" level=info msg="StartContainer for \"751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f\"" Mar 7 02:13:49.374691 systemd[1]: Started cri-containerd-751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f.scope - libcontainer container 751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f. Mar 7 02:13:49.406834 containerd[1454]: time="2026-03-07T02:13:49.406791906Z" level=info msg="StartContainer for \"751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f\" returns successfully" Mar 7 02:13:49.690789 kubelet[2503]: E0307 02:13:49.690705 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2tvt" podUID="4a09e6a6-ad07-4660-ad76-3cd9ebaad755" Mar 7 02:13:49.944743 systemd[1]: cri-containerd-751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f.scope: Deactivated successfully. Mar 7 02:13:49.965087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f-rootfs.mount: Deactivated successfully. Mar 7 02:13:49.971291 containerd[1454]: time="2026-03-07T02:13:49.971239594Z" level=info msg="shim disconnected" id=751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f namespace=k8s.io Mar 7 02:13:49.971465 containerd[1454]: time="2026-03-07T02:13:49.971291711Z" level=warning msg="cleaning up after shim disconnected" id=751206f0a7ebaea141a5a41903cb096599d4597116cd1f7b1527ebb9e77aa83f namespace=k8s.io Mar 7 02:13:49.971465 containerd[1454]: time="2026-03-07T02:13:49.971300859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:13:49.972855 kubelet[2503]: I0307 02:13:49.972821 2503 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 02:13:49.988868 containerd[1454]: time="2026-03-07T02:13:49.988809990Z" level=warning msg="cleanup warnings time=\"2026-03-07T02:13:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 02:13:50.014293 systemd[1]: Created slice kubepods-besteffort-pod676a8c94_335a_4977_b910_64f7a6bc8f5e.slice - libcontainer container kubepods-besteffort-pod676a8c94_335a_4977_b910_64f7a6bc8f5e.slice. Mar 7 02:13:50.021997 systemd[1]: Created slice kubepods-burstable-pod49673b6d_eace_4732_9c08_550044a6a02f.slice - libcontainer container kubepods-burstable-pod49673b6d_eace_4732_9c08_550044a6a02f.slice. Mar 7 02:13:50.031182 systemd[1]: Created slice kubepods-besteffort-pod0557d5f8_dfa6_4ac0_b2cb_c8ef999934ab.slice - libcontainer container kubepods-besteffort-pod0557d5f8_dfa6_4ac0_b2cb_c8ef999934ab.slice. 
Mar 7 02:13:50.038845 systemd[1]: Created slice kubepods-besteffort-pod400859bd_9f1f_404b_b164_62fa2410895c.slice - libcontainer container kubepods-besteffort-pod400859bd_9f1f_404b_b164_62fa2410895c.slice. Mar 7 02:13:50.045226 systemd[1]: Created slice kubepods-besteffort-pod9a48fdbb_1672_4738_9d0e_540e33f8a579.slice - libcontainer container kubepods-besteffort-pod9a48fdbb_1672_4738_9d0e_540e33f8a579.slice. Mar 7 02:13:50.052040 systemd[1]: Created slice kubepods-besteffort-pod746832be_1a83_49f8_83ca_d151d465a357.slice - libcontainer container kubepods-besteffort-pod746832be_1a83_49f8_83ca_d151d465a357.slice. Mar 7 02:13:50.060010 systemd[1]: Created slice kubepods-burstable-pod55af8801_3665_4537_a222_72d6ad960f77.slice - libcontainer container kubepods-burstable-pod55af8801_3665_4537_a222_72d6ad960f77.slice. Mar 7 02:13:50.069107 kubelet[2503]: I0307 02:13:50.069063 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab-config\") pod \"goldmane-9f7667bb8-9fkdf\" (UID: \"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab\") " pod="calico-system/goldmane-9f7667bb8-9fkdf" Mar 7 02:13:50.069107 kubelet[2503]: I0307 02:13:50.069104 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/676a8c94-335a-4977-b910-64f7a6bc8f5e-tigera-ca-bundle\") pod \"calico-kube-controllers-67d48759d7-k2tjz\" (UID: \"676a8c94-335a-4977-b910-64f7a6bc8f5e\") " pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" Mar 7 02:13:50.069203 kubelet[2503]: I0307 02:13:50.069120 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-ca-bundle\") pod \"whisker-9d799687-rnx8g\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " pod="calico-system/whisker-9d799687-rnx8g" Mar 7 02:13:50.069203 kubelet[2503]: I0307 02:13:50.069132 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmstx\" (UniqueName: \"kubernetes.io/projected/9a48fdbb-1672-4738-9d0e-540e33f8a579-kube-api-access-vmstx\") pod \"whisker-9d799687-rnx8g\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " pod="calico-system/whisker-9d799687-rnx8g" Mar 7 02:13:50.069253 kubelet[2503]: I0307 02:13:50.069231 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcbd\" (UniqueName: \"kubernetes.io/projected/676a8c94-335a-4977-b910-64f7a6bc8f5e-kube-api-access-8fcbd\") pod \"calico-kube-controllers-67d48759d7-k2tjz\" (UID: \"676a8c94-335a-4977-b910-64f7a6bc8f5e\") " pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" Mar 7 02:13:50.069277 kubelet[2503]: I0307 02:13:50.069262 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-backend-key-pair\") pod \"whisker-9d799687-rnx8g\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " pod="calico-system/whisker-9d799687-rnx8g" Mar 7 02:13:50.069299 kubelet[2503]: I0307 02:13:50.069279 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm5g2\" (UniqueName: 
\"kubernetes.io/projected/49673b6d-eace-4732-9c08-550044a6a02f-kube-api-access-tm5g2\") pod \"coredns-7d764666f9-gc87b\" (UID: \"49673b6d-eace-4732-9c08-550044a6a02f\") " pod="kube-system/coredns-7d764666f9-gc87b" Mar 7 02:13:50.069322 kubelet[2503]: I0307 02:13:50.069300 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab-goldmane-key-pair\") pod \"goldmane-9f7667bb8-9fkdf\" (UID: \"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab\") " pod="calico-system/goldmane-9f7667bb8-9fkdf" Mar 7 02:13:50.069322 kubelet[2503]: I0307 02:13:50.069312 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/746832be-1a83-49f8-83ca-d151d465a357-calico-apiserver-certs\") pod \"calico-apiserver-67fc86959f-pwfm7\" (UID: \"746832be-1a83-49f8-83ca-d151d465a357\") " pod="calico-system/calico-apiserver-67fc86959f-pwfm7" Mar 7 02:13:50.069366 kubelet[2503]: I0307 02:13:50.069326 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgdm\" (UniqueName: \"kubernetes.io/projected/400859bd-9f1f-404b-b164-62fa2410895c-kube-api-access-2kgdm\") pod \"calico-apiserver-67fc86959f-bhm6g\" (UID: \"400859bd-9f1f-404b-b164-62fa2410895c\") " pod="calico-system/calico-apiserver-67fc86959f-bhm6g" Mar 7 02:13:50.069366 kubelet[2503]: I0307 02:13:50.069338 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-nginx-config\") pod \"whisker-9d799687-rnx8g\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " pod="calico-system/whisker-9d799687-rnx8g" Mar 7 02:13:50.069366 kubelet[2503]: I0307 02:13:50.069351 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdv22\" (UniqueName: \"kubernetes.io/projected/746832be-1a83-49f8-83ca-d151d465a357-kube-api-access-hdv22\") pod \"calico-apiserver-67fc86959f-pwfm7\" (UID: \"746832be-1a83-49f8-83ca-d151d465a357\") " pod="calico-system/calico-apiserver-67fc86959f-pwfm7" Mar 7 02:13:50.069366 kubelet[2503]: I0307 02:13:50.069364 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/400859bd-9f1f-404b-b164-62fa2410895c-calico-apiserver-certs\") pod \"calico-apiserver-67fc86959f-bhm6g\" (UID: \"400859bd-9f1f-404b-b164-62fa2410895c\") " pod="calico-system/calico-apiserver-67fc86959f-bhm6g" Mar 7 02:13:50.069476 kubelet[2503]: I0307 02:13:50.069377 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55af8801-3665-4537-a222-72d6ad960f77-config-volume\") pod \"coredns-7d764666f9-jd58h\" (UID: \"55af8801-3665-4537-a222-72d6ad960f77\") " pod="kube-system/coredns-7d764666f9-jd58h" Mar 7 02:13:50.069476 kubelet[2503]: I0307 02:13:50.069389 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2ct\" (UniqueName: \"kubernetes.io/projected/55af8801-3665-4537-a222-72d6ad960f77-kube-api-access-9s2ct\") pod \"coredns-7d764666f9-jd58h\" (UID: \"55af8801-3665-4537-a222-72d6ad960f77\") " pod="kube-system/coredns-7d764666f9-jd58h" Mar 7 
02:13:50.069476 kubelet[2503]: I0307 02:13:50.069402 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49673b6d-eace-4732-9c08-550044a6a02f-config-volume\") pod \"coredns-7d764666f9-gc87b\" (UID: \"49673b6d-eace-4732-9c08-550044a6a02f\") " pod="kube-system/coredns-7d764666f9-gc87b" Mar 7 02:13:50.069476 kubelet[2503]: I0307 02:13:50.069447 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-9fkdf\" (UID: \"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab\") " pod="calico-system/goldmane-9f7667bb8-9fkdf" Mar 7 02:13:50.069476 kubelet[2503]: I0307 02:13:50.069460 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2zf\" (UniqueName: \"kubernetes.io/projected/0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab-kube-api-access-jk2zf\") pod \"goldmane-9f7667bb8-9fkdf\" (UID: \"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab\") " pod="calico-system/goldmane-9f7667bb8-9fkdf" Mar 7 02:13:50.326043 containerd[1454]: time="2026-03-07T02:13:50.325863249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d48759d7-k2tjz,Uid:676a8c94-335a-4977-b910-64f7a6bc8f5e,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:50.331894 kubelet[2503]: E0307 02:13:50.331813 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:50.332172 containerd[1454]: time="2026-03-07T02:13:50.332139177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gc87b,Uid:49673b6d-eace-4732-9c08-550044a6a02f,Namespace:kube-system,Attempt:0,}" Mar 7 02:13:50.337987 containerd[1454]: time="2026-03-07T02:13:50.337936253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9fkdf,Uid:0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:50.344666 containerd[1454]: time="2026-03-07T02:13:50.344482156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-bhm6g,Uid:400859bd-9f1f-404b-b164-62fa2410895c,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:50.353001 containerd[1454]: time="2026-03-07T02:13:50.352937884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9d799687-rnx8g,Uid:9a48fdbb-1672-4738-9d0e-540e33f8a579,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:50.365878 containerd[1454]: time="2026-03-07T02:13:50.365571982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-pwfm7,Uid:746832be-1a83-49f8-83ca-d151d465a357,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:50.366165 kubelet[2503]: E0307 02:13:50.366140 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:13:50.366916 containerd[1454]: time="2026-03-07T02:13:50.366762258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jd58h,Uid:55af8801-3665-4537-a222-72d6ad960f77,Namespace:kube-system,Attempt:0,}" Mar 7 02:13:50.473094 containerd[1454]: time="2026-03-07T02:13:50.473052885Z" level=error msg="Failed to destroy network for sandbox 
\"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.473990 containerd[1454]: time="2026-03-07T02:13:50.473730290Z" level=error msg="encountered an error cleaning up failed sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.473990 containerd[1454]: time="2026-03-07T02:13:50.473774663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-bhm6g,Uid:400859bd-9f1f-404b-b164-62fa2410895c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.482148 kubelet[2503]: E0307 02:13:50.481790 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.482148 kubelet[2503]: E0307 02:13:50.481847 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67fc86959f-bhm6g" Mar 7 02:13:50.482148 kubelet[2503]: E0307 02:13:50.481864 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67fc86959f-bhm6g" Mar 7 02:13:50.482294 kubelet[2503]: E0307 02:13:50.481911 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fc86959f-bhm6g_calico-system(400859bd-9f1f-404b-b164-62fa2410895c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fc86959f-bhm6g_calico-system(400859bd-9f1f-404b-b164-62fa2410895c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67fc86959f-bhm6g" podUID="400859bd-9f1f-404b-b164-62fa2410895c" Mar 7 02:13:50.488676 
containerd[1454]: time="2026-03-07T02:13:50.488609786Z" level=error msg="Failed to destroy network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.489491 containerd[1454]: time="2026-03-07T02:13:50.489389521Z" level=error msg="encountered an error cleaning up failed sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.489570 containerd[1454]: time="2026-03-07T02:13:50.489483548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d48759d7-k2tjz,Uid:676a8c94-335a-4977-b910-64f7a6bc8f5e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.489746 kubelet[2503]: E0307 02:13:50.489678 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.489746 kubelet[2503]: E0307 02:13:50.489713 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" Mar 7 02:13:50.489746 kubelet[2503]: E0307 02:13:50.489729 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" Mar 7 02:13:50.489831 kubelet[2503]: E0307 02:13:50.489765 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d48759d7-k2tjz_calico-system(676a8c94-335a-4977-b910-64f7a6bc8f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d48759d7-k2tjz_calico-system(676a8c94-335a-4977-b910-64f7a6bc8f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" podUID="676a8c94-335a-4977-b910-64f7a6bc8f5e" Mar 7 02:13:50.491906 containerd[1454]: time="2026-03-07T02:13:50.491881596Z" level=error msg="Failed to destroy network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.492403 containerd[1454]: time="2026-03-07T02:13:50.492378413Z" level=error msg="encountered an error cleaning up failed sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.492569 containerd[1454]: time="2026-03-07T02:13:50.492543882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gc87b,Uid:49673b6d-eace-4732-9c08-550044a6a02f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.492937 kubelet[2503]: E0307 02:13:50.492820 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.492937 kubelet[2503]: E0307 02:13:50.492916 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gc87b" Mar 7 02:13:50.492937 kubelet[2503]: E0307 02:13:50.492932 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gc87b" Mar 7 02:13:50.493058 kubelet[2503]: E0307 02:13:50.492970 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-gc87b_kube-system(49673b6d-eace-4732-9c08-550044a6a02f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-gc87b_kube-system(49673b6d-eace-4732-9c08-550044a6a02f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gc87b" podUID="49673b6d-eace-4732-9c08-550044a6a02f" Mar 7 02:13:50.504891 containerd[1454]: time="2026-03-07T02:13:50.504847339Z" level=error msg="Failed to destroy network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.505568 containerd[1454]: time="2026-03-07T02:13:50.505469971Z" level=error msg="encountered an error cleaning up failed sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.505648 containerd[1454]: time="2026-03-07T02:13:50.505624109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9d799687-rnx8g,Uid:9a48fdbb-1672-4738-9d0e-540e33f8a579,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.506130 kubelet[2503]: E0307 02:13:50.505854 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.506130 kubelet[2503]: E0307 02:13:50.505891 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9d799687-rnx8g" Mar 7 02:13:50.506130 kubelet[2503]: E0307 02:13:50.505908 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9d799687-rnx8g" Mar 7 02:13:50.507056 kubelet[2503]: E0307 02:13:50.505943 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9d799687-rnx8g_calico-system(9a48fdbb-1672-4738-9d0e-540e33f8a579)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9d799687-rnx8g_calico-system(9a48fdbb-1672-4738-9d0e-540e33f8a579)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9d799687-rnx8g" podUID="9a48fdbb-1672-4738-9d0e-540e33f8a579" Mar 7 02:13:50.526777 containerd[1454]: time="2026-03-07T02:13:50.526716844Z" level=error msg="Failed to destroy network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.527496 containerd[1454]: time="2026-03-07T02:13:50.527455363Z" level=error msg="encountered an error cleaning up failed sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.527837 containerd[1454]: time="2026-03-07T02:13:50.527557935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9fkdf,Uid:0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.528282 kubelet[2503]: E0307 02:13:50.528031 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.528282 kubelet[2503]: E0307 02:13:50.528069 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-9fkdf" Mar 7 02:13:50.528282 kubelet[2503]: E0307 02:13:50.528087 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-9fkdf" Mar 7 02:13:50.528379 kubelet[2503]: E0307 02:13:50.528124 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-9fkdf_calico-system(0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-9fkdf_calico-system(0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-9fkdf" podUID="0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab" Mar 7 02:13:50.540581 containerd[1454]: time="2026-03-07T02:13:50.540489950Z" level=error msg="Failed to destroy network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.541012 containerd[1454]: time="2026-03-07T02:13:50.540952683Z" level=error msg="encountered an error cleaning up failed sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.541044 containerd[1454]: time="2026-03-07T02:13:50.541025609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-pwfm7,Uid:746832be-1a83-49f8-83ca-d151d465a357,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.541297 kubelet[2503]: E0307 02:13:50.541257 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.541386 kubelet[2503]: E0307 02:13:50.541319 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67fc86959f-pwfm7" Mar 7 02:13:50.541386 kubelet[2503]: E0307 02:13:50.541339 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67fc86959f-pwfm7" Mar 7 02:13:50.541460 kubelet[2503]: E0307 02:13:50.541393 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fc86959f-pwfm7_calico-system(746832be-1a83-49f8-83ca-d151d465a357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-67fc86959f-pwfm7_calico-system(746832be-1a83-49f8-83ca-d151d465a357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67fc86959f-pwfm7" podUID="746832be-1a83-49f8-83ca-d151d465a357" Mar 7 02:13:50.544486 containerd[1454]: time="2026-03-07T02:13:50.544435561Z" level=error msg="Failed to destroy network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.544891 containerd[1454]: time="2026-03-07T02:13:50.544831409Z" level=error msg="encountered an error cleaning up failed sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.544933 containerd[1454]: time="2026-03-07T02:13:50.544891872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jd58h,Uid:55af8801-3665-4537-a222-72d6ad960f77,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.545120 kubelet[2503]: E0307 02:13:50.545083 2503 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.545205 kubelet[2503]: E0307 02:13:50.545130 2503 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-jd58h" Mar 7 02:13:50.545205 kubelet[2503]: E0307 02:13:50.545148 2503 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-jd58h" Mar 7 02:13:50.545205 kubelet[2503]: E0307 02:13:50.545187 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-jd58h_kube-system(55af8801-3665-4537-a222-72d6ad960f77)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-jd58h_kube-system(55af8801-3665-4537-a222-72d6ad960f77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-jd58h" podUID="55af8801-3665-4537-a222-72d6ad960f77" Mar 7 02:13:50.797268 kubelet[2503]: I0307 02:13:50.797238 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:13:50.799110 kubelet[2503]: I0307 02:13:50.799080 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:13:50.799711 containerd[1454]: time="2026-03-07T02:13:50.799665389Z" level=info msg="StopPodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\"" Mar 7 02:13:50.800544 containerd[1454]: time="2026-03-07T02:13:50.800303911Z" level=info msg="StopPodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\"" Mar 7 02:13:50.801091 containerd[1454]: time="2026-03-07T02:13:50.801072545Z" level=info msg="Ensure that sandbox 1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439 in task-service has been cleanup successfully" Mar 7 02:13:50.801257 kubelet[2503]: I0307 02:13:50.801227 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:13:50.801773 containerd[1454]: time="2026-03-07T02:13:50.801641247Z" level=info msg="StopPodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\"" Mar 7 02:13:50.801773 containerd[1454]: time="2026-03-07T02:13:50.801753316Z" level=info msg="Ensure that sandbox 1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a in task-service has been cleanup successfully" Mar 7 02:13:50.802285 containerd[1454]: time="2026-03-07T02:13:50.802118037Z" level=info msg="Ensure that sandbox a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e in task-service has been cleanup successfully" Mar 7 02:13:50.804916 kubelet[2503]: I0307 02:13:50.804868 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:13:50.805811 containerd[1454]: time="2026-03-07T02:13:50.805729799Z" level=info msg="StopPodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\"" Mar 7 02:13:50.805858 containerd[1454]: time="2026-03-07T02:13:50.805841247Z" level=info msg="Ensure that sandbox caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e in task-service has been cleanup successfully" Mar 7 02:13:50.835560 kubelet[2503]: I0307 02:13:50.833565 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:13:50.835811 containerd[1454]: time="2026-03-07T02:13:50.834372523Z" level=info msg="StopPodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\"" Mar 7 02:13:50.835811 containerd[1454]: time="2026-03-07T02:13:50.834579409Z" level=info msg="Ensure 
that sandbox 454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e in task-service has been cleanup successfully" Mar 7 02:13:50.838591 kubelet[2503]: I0307 02:13:50.838008 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:13:50.841664 kubelet[2503]: I0307 02:13:50.841310 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:13:50.842204 containerd[1454]: time="2026-03-07T02:13:50.840254542Z" level=info msg="StopPodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\"" Mar 7 02:13:50.842685 containerd[1454]: time="2026-03-07T02:13:50.842439931Z" level=info msg="Ensure that sandbox 411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58 in task-service has been cleanup successfully" Mar 7 02:13:50.842878 containerd[1454]: time="2026-03-07T02:13:50.842153858Z" level=info msg="StopPodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\"" Mar 7 02:13:50.843038 containerd[1454]: time="2026-03-07T02:13:50.843020064Z" level=info msg="Ensure that sandbox 01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd in task-service has been cleanup successfully" Mar 7 02:13:50.851219 containerd[1454]: time="2026-03-07T02:13:50.851166485Z" level=error msg="StopPodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" failed" error="failed to destroy network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.851460 kubelet[2503]: E0307 02:13:50.851388 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:13:50.851539 kubelet[2503]: E0307 02:13:50.851469 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439"} Mar 7 02:13:50.851572 kubelet[2503]: E0307 02:13:50.851554 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.851644 kubelet[2503]: E0307 02:13:50.851575 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-9fkdf" podUID="0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab" Mar 7 02:13:50.852741 containerd[1454]: time="2026-03-07T02:13:50.852688587Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 02:13:50.859433 containerd[1454]: time="2026-03-07T02:13:50.859384199Z" level=error msg="StopPodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" failed" error="failed to destroy network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.859696 kubelet[2503]: E0307 02:13:50.859673 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:13:50.860073 kubelet[2503]: E0307 02:13:50.860058 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a"} Mar 7 02:13:50.860282 kubelet[2503]: E0307 02:13:50.860224 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"676a8c94-335a-4977-b910-64f7a6bc8f5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.860282 kubelet[2503]: E0307 02:13:50.860254 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"676a8c94-335a-4977-b910-64f7a6bc8f5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" podUID="676a8c94-335a-4977-b910-64f7a6bc8f5e" Mar 7 02:13:50.883259 containerd[1454]: time="2026-03-07T02:13:50.883022322Z" level=info msg="CreateContainer within sandbox \"f9c1f5a40e7a641fd623db2d6169f216f62b1098081685d3f1b63988db1df961\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e6956185b7ba2b53cee20ac2ef04106102857c48fd7ff7360bd03579aaa6c2d6\"" Mar 7 02:13:50.883887 containerd[1454]: time="2026-03-07T02:13:50.883610300Z" level=error msg="StopPodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" failed" error="failed to destroy network for 
sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.883942 kubelet[2503]: E0307 02:13:50.883760 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:13:50.883942 kubelet[2503]: E0307 02:13:50.883787 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e"} Mar 7 02:13:50.883942 kubelet[2503]: E0307 02:13:50.883813 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49673b6d-eace-4732-9c08-550044a6a02f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.883942 kubelet[2503]: E0307 02:13:50.883832 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49673b6d-eace-4732-9c08-550044a6a02f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gc87b" podUID="49673b6d-eace-4732-9c08-550044a6a02f" Mar 7 02:13:50.885468 containerd[1454]: time="2026-03-07T02:13:50.885393458Z" level=info msg="StartContainer for \"e6956185b7ba2b53cee20ac2ef04106102857c48fd7ff7360bd03579aaa6c2d6\"" Mar 7 02:13:50.888933 containerd[1454]: time="2026-03-07T02:13:50.888737119Z" level=error msg="StopPodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" failed" error="failed to destroy network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.888975 kubelet[2503]: E0307 02:13:50.888835 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:13:50.888975 kubelet[2503]: E0307 02:13:50.888861 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e"} Mar 7 02:13:50.888975 kubelet[2503]: E0307 02:13:50.888880 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"746832be-1a83-49f8-83ca-d151d465a357\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.888975 kubelet[2503]: E0307 02:13:50.888900 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"746832be-1a83-49f8-83ca-d151d465a357\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67fc86959f-pwfm7" podUID="746832be-1a83-49f8-83ca-d151d465a357" Mar 7 02:13:50.895823 containerd[1454]: time="2026-03-07T02:13:50.895795344Z" level=error msg="StopPodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" failed" error="failed to destroy network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.896110 kubelet[2503]: E0307 02:13:50.896007 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:13:50.896110 kubelet[2503]: E0307 02:13:50.896035 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58"} Mar 7 02:13:50.896110 kubelet[2503]: E0307 02:13:50.896056 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a48fdbb-1672-4738-9d0e-540e33f8a579\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.896110 kubelet[2503]: E0307 02:13:50.896076 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a48fdbb-1672-4738-9d0e-540e33f8a579\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9d799687-rnx8g" podUID="9a48fdbb-1672-4738-9d0e-540e33f8a579" Mar 7 02:13:50.901562 containerd[1454]: time="2026-03-07T02:13:50.901472025Z" level=error msg="StopPodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" failed" error="failed to destroy network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.901772 kubelet[2503]: E0307 02:13:50.901697 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:13:50.901772 kubelet[2503]: E0307 02:13:50.901740 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e"} Mar 7 02:13:50.901772 kubelet[2503]: E0307 02:13:50.901761 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55af8801-3665-4537-a222-72d6ad960f77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.901896 kubelet[2503]: E0307 02:13:50.901778 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55af8801-3665-4537-a222-72d6ad960f77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-jd58h" podUID="55af8801-3665-4537-a222-72d6ad960f77" Mar 7 02:13:50.909239 containerd[1454]: time="2026-03-07T02:13:50.909116309Z" level=error msg="StopPodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" failed" error="failed to destroy network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:13:50.909449 kubelet[2503]: E0307 02:13:50.909403 2503 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:13:50.909489 kubelet[2503]: E0307 02:13:50.909451 2503 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd"} Mar 7 02:13:50.909489 kubelet[2503]: E0307 02:13:50.909471 2503 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"400859bd-9f1f-404b-b164-62fa2410895c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:13:50.909665 kubelet[2503]: E0307 02:13:50.909492 2503 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"400859bd-9f1f-404b-b164-62fa2410895c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67fc86959f-bhm6g" podUID="400859bd-9f1f-404b-b164-62fa2410895c" Mar 7 02:13:50.925686 systemd[1]: Started cri-containerd-e6956185b7ba2b53cee20ac2ef04106102857c48fd7ff7360bd03579aaa6c2d6.scope - libcontainer container e6956185b7ba2b53cee20ac2ef04106102857c48fd7ff7360bd03579aaa6c2d6. Mar 7 02:13:50.957066 containerd[1454]: time="2026-03-07T02:13:50.957031769Z" level=info msg="StartContainer for \"e6956185b7ba2b53cee20ac2ef04106102857c48fd7ff7360bd03579aaa6c2d6\" returns successfully" Mar 7 02:13:51.307339 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439-shm.mount: Deactivated successfully. Mar 7 02:13:51.307757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd-shm.mount: Deactivated successfully. Mar 7 02:13:51.307908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e-shm.mount: Deactivated successfully. Mar 7 02:13:51.308040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a-shm.mount: Deactivated successfully. Mar 7 02:13:51.697215 systemd[1]: Created slice kubepods-besteffort-pod4a09e6a6_ad07_4660_ad76_3cd9ebaad755.slice - libcontainer container kubepods-besteffort-pod4a09e6a6_ad07_4660_ad76_3cd9ebaad755.slice. 
Mar 7 02:13:51.702058 containerd[1454]: time="2026-03-07T02:13:51.702003012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2tvt,Uid:4a09e6a6-ad07-4660-ad76-3cd9ebaad755,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:51.836912 systemd-networkd[1381]: calicf43698092a: Link UP Mar 7 02:13:51.837293 systemd-networkd[1381]: calicf43698092a: Gained carrier Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.739 [ERROR][3735] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.759 [INFO][3735] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v2tvt-eth0 csi-node-driver- calico-system 4a09e6a6-ad07-4660-ad76-3cd9ebaad755 708 0 2026-03-07 02:13:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-v2tvt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicf43698092a [] [] }} ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.759 [INFO][3735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.787 [INFO][3749] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" HandleID="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Workload="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.793 [INFO][3749] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" HandleID="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Workload="localhost-k8s-csi--node--driver--v2tvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001399a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v2tvt", "timestamp":"2026-03-07 02:13:51.787119082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002aedc0)} Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.793 [INFO][3749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.793 [INFO][3749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.793 [INFO][3749] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.795 [INFO][3749] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.799 [INFO][3749] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.804 [INFO][3749] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.806 [INFO][3749] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.808 [INFO][3749] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.808 [INFO][3749] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.809 [INFO][3749] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14 Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.813 [INFO][3749] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.817 [INFO][3749] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.817 [INFO][3749] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" host="localhost" Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.817 [INFO][3749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:13:51.859255 containerd[1454]: 2026-03-07 02:13:51.817 [INFO][3749] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" HandleID="k8s-pod-network.46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Workload="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.860024 containerd[1454]: 2026-03-07 02:13:51.822 [INFO][3735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v2tvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a09e6a6-ad07-4660-ad76-3cd9ebaad755", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v2tvt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf43698092a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:13:51.860024 containerd[1454]: 2026-03-07 02:13:51.822 [INFO][3735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.860024 containerd[1454]: 2026-03-07 02:13:51.822 [INFO][3735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf43698092a ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.860024 containerd[1454]: 2026-03-07 02:13:51.837 [INFO][3735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.860024 containerd[1454]: 2026-03-07 02:13:51.838 [INFO][3735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v2tvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a09e6a6-ad07-4660-ad76-3cd9ebaad755", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14", Pod:"csi-node-driver-v2tvt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf43698092a", MAC:"46:16:0e:be:59:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:13:51.860024 containerd[1454]: 2026-03-07 02:13:51.849 [INFO][3735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14" Namespace="calico-system" Pod="csi-node-driver-v2tvt" WorkloadEndpoint="localhost-k8s-csi--node--driver--v2tvt-eth0" Mar 7 02:13:51.862646 containerd[1454]: time="2026-03-07T02:13:51.862586435Z" level=info msg="StopPodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\"" Mar 7 02:13:51.877538 kubelet[2503]: I0307 02:13:51.877202 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-s4qqx" podStartSLOduration=1.8845779390000001 podStartE2EDuration="14.877190496s" podCreationTimestamp="2026-03-07 02:13:37 +0000 UTC" firstStartedPulling="2026-03-07 02:13:37.834641209 +0000 UTC m=+16.294934195" lastFinishedPulling="2026-03-07 02:13:50.827253764 +0000 UTC m=+29.287546752" observedRunningTime="2026-03-07 02:13:51.875234925 +0000 UTC m=+30.335527912" watchObservedRunningTime="2026-03-07 02:13:51.877190496 +0000 UTC m=+30.337483483" Mar 7 02:13:51.901351 systemd[1]: run-containerd-runc-k8s.io-e6956185b7ba2b53cee20ac2ef04106102857c48fd7ff7360bd03579aaa6c2d6-runc.d9ZfjA.mount: Deactivated successfully. Mar 7 02:13:51.901701 containerd[1454]: time="2026-03-07T02:13:51.901328847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:51.901701 containerd[1454]: time="2026-03-07T02:13:51.901676015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:51.905051 containerd[1454]: time="2026-03-07T02:13:51.904809715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:51.905051 containerd[1454]: time="2026-03-07T02:13:51.904892540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:51.937667 systemd[1]: Started cri-containerd-46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14.scope - libcontainer container 46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14. Mar 7 02:13:51.958131 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:13:51.975249 containerd[1454]: time="2026-03-07T02:13:51.975140014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2tvt,Uid:4a09e6a6-ad07-4660-ad76-3cd9ebaad755,Namespace:calico-system,Attempt:0,} returns sandbox id \"46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14\"" Mar 7 02:13:51.980278 containerd[1454]: time="2026-03-07T02:13:51.980086607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.943 [INFO][3781] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.943 [INFO][3781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" iface="eth0" netns="/var/run/netns/cni-b868cdcb-b182-bf85-d352-416f0f2da9b9" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.943 [INFO][3781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" iface="eth0" netns="/var/run/netns/cni-b868cdcb-b182-bf85-d352-416f0f2da9b9" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.944 [INFO][3781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" iface="eth0" netns="/var/run/netns/cni-b868cdcb-b182-bf85-d352-416f0f2da9b9" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.944 [INFO][3781] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.944 [INFO][3781] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.974 [INFO][3844] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.974 [INFO][3844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.974 [INFO][3844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.983 [WARNING][3844] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.983 [INFO][3844] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.986 [INFO][3844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:13:51.991472 containerd[1454]: 2026-03-07 02:13:51.989 [INFO][3781] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:13:51.992119 containerd[1454]: time="2026-03-07T02:13:51.992089198Z" level=info msg="TearDown network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" successfully" Mar 7 02:13:51.992119 containerd[1454]: time="2026-03-07T02:13:51.992117029Z" level=info msg="StopPodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" returns successfully" Mar 7 02:13:52.085694 kubelet[2503]: I0307 02:13:52.085368 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-backend-key-pair\") pod \"9a48fdbb-1672-4738-9d0e-540e33f8a579\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " Mar 7 02:13:52.085694 kubelet[2503]: I0307 02:13:52.085653 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-nginx-config\" (UniqueName: \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-nginx-config\") pod \"9a48fdbb-1672-4738-9d0e-540e33f8a579\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " Mar 7 02:13:52.085851 kubelet[2503]: I0307 02:13:52.085739 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-ca-bundle\") pod \"9a48fdbb-1672-4738-9d0e-540e33f8a579\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " Mar 7 02:13:52.085851 kubelet[2503]: I0307 02:13:52.085780 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9a48fdbb-1672-4738-9d0e-540e33f8a579-kube-api-access-vmstx\" (UniqueName: \"kubernetes.io/projected/9a48fdbb-1672-4738-9d0e-540e33f8a579-kube-api-access-vmstx\") pod \"9a48fdbb-1672-4738-9d0e-540e33f8a579\" (UID: \"9a48fdbb-1672-4738-9d0e-540e33f8a579\") " Mar 7 02:13:52.086330 kubelet[2503]: I0307 02:13:52.085890 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-nginx-config" pod "9a48fdbb-1672-4738-9d0e-540e33f8a579" (UID: "9a48fdbb-1672-4738-9d0e-540e33f8a579"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:13:52.086330 kubelet[2503]: I0307 02:13:52.086305 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-ca-bundle" pod "9a48fdbb-1672-4738-9d0e-540e33f8a579" (UID: "9a48fdbb-1672-4738-9d0e-540e33f8a579"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:13:52.089596 kubelet[2503]: I0307 02:13:52.089555 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-backend-key-pair" pod "9a48fdbb-1672-4738-9d0e-540e33f8a579" (UID: "9a48fdbb-1672-4738-9d0e-540e33f8a579"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 02:13:52.090725 kubelet[2503]: I0307 02:13:52.090642 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a48fdbb-1672-4738-9d0e-540e33f8a579-kube-api-access-vmstx" pod "9a48fdbb-1672-4738-9d0e-540e33f8a579" (UID: "9a48fdbb-1672-4738-9d0e-540e33f8a579"). InnerVolumeSpecName "kube-api-access-vmstx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:13:52.186542 kubelet[2503]: I0307 02:13:52.186458 2503 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 7 02:13:52.186542 kubelet[2503]: I0307 02:13:52.186547 2503 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 7 02:13:52.186704 kubelet[2503]: I0307 02:13:52.186561 2503 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmstx\" (UniqueName: \"kubernetes.io/projected/9a48fdbb-1672-4738-9d0e-540e33f8a579-kube-api-access-vmstx\") on node \"localhost\" DevicePath \"\"" Mar 7 02:13:52.186704 kubelet[2503]: I0307 02:13:52.186571 2503 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a48fdbb-1672-4738-9d0e-540e33f8a579-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 7 02:13:52.304597 systemd[1]: run-netns-cni\x2db868cdcb\x2db182\x2dbf85\x2dd352\x2d416f0f2da9b9.mount: Deactivated successfully. Mar 7 02:13:52.304717 systemd[1]: var-lib-kubelet-pods-9a48fdbb\x2d1672\x2d4738\x2d9d0e\x2d540e33f8a579-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvmstx.mount: Deactivated successfully. Mar 7 02:13:52.304798 systemd[1]: var-lib-kubelet-pods-9a48fdbb\x2d1672\x2d4738\x2d9d0e\x2d540e33f8a579-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 02:13:52.574917 containerd[1454]: time="2026-03-07T02:13:52.574787825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:52.576494 containerd[1454]: time="2026-03-07T02:13:52.576258701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 02:13:52.578351 containerd[1454]: time="2026-03-07T02:13:52.578281896Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:52.583166 containerd[1454]: time="2026-03-07T02:13:52.582204439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:52.583166 containerd[1454]: time="2026-03-07T02:13:52.582970339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 602.856401ms" Mar 7 02:13:52.583166 containerd[1454]: time="2026-03-07T02:13:52.582994094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 02:13:52.590346 containerd[1454]: time="2026-03-07T02:13:52.590290942Z" level=info msg="CreateContainer within sandbox \"46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 02:13:52.616167 containerd[1454]: time="2026-03-07T02:13:52.616107783Z" level=info msg="CreateContainer within sandbox \"46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c176883af075563ee4ddb93c77b13a1906031d4c07c292e6b03d4b93740d0aa5\"" Mar 7 02:13:52.616810 containerd[1454]: time="2026-03-07T02:13:52.616749542Z" level=info msg="StartContainer for \"c176883af075563ee4ddb93c77b13a1906031d4c07c292e6b03d4b93740d0aa5\"" Mar 7 02:13:52.626548 kernel: calico-node[3967]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 02:13:52.673703 systemd[1]: Started cri-containerd-c176883af075563ee4ddb93c77b13a1906031d4c07c292e6b03d4b93740d0aa5.scope - libcontainer container c176883af075563ee4ddb93c77b13a1906031d4c07c292e6b03d4b93740d0aa5. Mar 7 02:13:52.744587 containerd[1454]: time="2026-03-07T02:13:52.742608846Z" level=info msg="StartContainer for \"c176883af075563ee4ddb93c77b13a1906031d4c07c292e6b03d4b93740d0aa5\" returns successfully" Mar 7 02:13:52.748769 containerd[1454]: time="2026-03-07T02:13:52.747742071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 02:13:52.874615 systemd[1]: Removed slice kubepods-besteffort-pod9a48fdbb_1672_4738_9d0e_540e33f8a579.slice - libcontainer container kubepods-besteffort-pod9a48fdbb_1672_4738_9d0e_540e33f8a579.slice. Mar 7 02:13:52.934918 systemd[1]: Created slice kubepods-besteffort-pod17e78583_d977_488f_a43b_2f148172e81f.slice - libcontainer container kubepods-besteffort-pod17e78583_d977_488f_a43b_2f148172e81f.slice. 
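The repo digest in the PullImage result above (ghcr.io/flatcar/calico/csi@sha256:ab57...) is an OCI content digest: the SHA-256 of the image manifest bytes, rendered as "sha256:<hex>". The snippet below is only a minimal illustration of that digest form applied to an arbitrary blob, not containerd's pull or verification code.

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    )

    // ociDigest renders a blob's SHA-256 in the "sha256:<hex>" form used for the
    // repo digest in the PullImage log line above. For a real image the blob
    // would be the manifest bytes fetched from the registry.
    func ociDigest(blob []byte) string {
    	sum := sha256.Sum256(blob)
    	return fmt.Sprintf("sha256:%x", sum)
    }

    func main() {
    	fmt.Println(ociDigest([]byte("example manifest bytes"))) // illustrative input
    }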
Mar 7 02:13:52.992692 kubelet[2503]: I0307 02:13:52.992629 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/17e78583-d977-488f-a43b-2f148172e81f-nginx-config\") pod \"whisker-5dc459764-qdqrd\" (UID: \"17e78583-d977-488f-a43b-2f148172e81f\") " pod="calico-system/whisker-5dc459764-qdqrd" Mar 7 02:13:52.992692 kubelet[2503]: I0307 02:13:52.992681 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2zd\" (UniqueName: \"kubernetes.io/projected/17e78583-d977-488f-a43b-2f148172e81f-kube-api-access-zj2zd\") pod \"whisker-5dc459764-qdqrd\" (UID: \"17e78583-d977-488f-a43b-2f148172e81f\") " pod="calico-system/whisker-5dc459764-qdqrd" Mar 7 02:13:52.993121 kubelet[2503]: I0307 02:13:52.992704 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17e78583-d977-488f-a43b-2f148172e81f-whisker-ca-bundle\") pod \"whisker-5dc459764-qdqrd\" (UID: \"17e78583-d977-488f-a43b-2f148172e81f\") " pod="calico-system/whisker-5dc459764-qdqrd" Mar 7 02:13:52.993121 kubelet[2503]: I0307 02:13:52.992721 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17e78583-d977-488f-a43b-2f148172e81f-whisker-backend-key-pair\") pod \"whisker-5dc459764-qdqrd\" (UID: \"17e78583-d977-488f-a43b-2f148172e81f\") " pod="calico-system/whisker-5dc459764-qdqrd" Mar 7 02:13:53.231477 systemd-networkd[1381]: vxlan.calico: Link UP Mar 7 02:13:53.231487 systemd-networkd[1381]: vxlan.calico: Gained carrier Mar 7 02:13:53.242962 containerd[1454]: time="2026-03-07T02:13:53.242912665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc459764-qdqrd,Uid:17e78583-d977-488f-a43b-2f148172e81f,Namespace:calico-system,Attempt:0,}" Mar 7 02:13:53.308465 systemd[1]: run-containerd-runc-k8s.io-c176883af075563ee4ddb93c77b13a1906031d4c07c292e6b03d4b93740d0aa5-runc.UW9is7.mount: Deactivated successfully. 
Mar 7 02:13:53.395927 systemd-networkd[1381]: cali950289066d2: Link UP Mar 7 02:13:53.396987 systemd-networkd[1381]: cali950289066d2: Gained carrier Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.314 [INFO][4104] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5dc459764--qdqrd-eth0 whisker-5dc459764- calico-system 17e78583-d977-488f-a43b-2f148172e81f 924 0 2026-03-07 02:13:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dc459764 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5dc459764-qdqrd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali950289066d2 [] [] }} ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.314 [INFO][4104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.345 [INFO][4124] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" HandleID="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Workload="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.351 [INFO][4124] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" HandleID="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Workload="localhost-k8s-whisker--5dc459764--qdqrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5dc459764-qdqrd", "timestamp":"2026-03-07 02:13:53.345063876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000548f20)} Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.351 [INFO][4124] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.351 [INFO][4124] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.351 [INFO][4124] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.354 [INFO][4124] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.358 [INFO][4124] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.363 [INFO][4124] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.366 [INFO][4124] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.368 [INFO][4124] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.368 [INFO][4124] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.373 [INFO][4124] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6 Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.383 [INFO][4124] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.388 [INFO][4124] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.388 [INFO][4124] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" host="localhost" Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.389 [INFO][4124] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:13:53.420722 containerd[1454]: 2026-03-07 02:13:53.389 [INFO][4124] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" HandleID="k8s-pod-network.992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Workload="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.421400 containerd[1454]: 2026-03-07 02:13:53.393 [INFO][4104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dc459764--qdqrd-eth0", GenerateName:"whisker-5dc459764-", Namespace:"calico-system", SelfLink:"", UID:"17e78583-d977-488f-a43b-2f148172e81f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dc459764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5dc459764-qdqrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali950289066d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:13:53.421400 containerd[1454]: 2026-03-07 02:13:53.393 [INFO][4104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.421400 containerd[1454]: 2026-03-07 02:13:53.393 [INFO][4104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali950289066d2 ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.421400 containerd[1454]: 2026-03-07 02:13:53.396 [INFO][4104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.421400 containerd[1454]: 2026-03-07 02:13:53.397 [INFO][4104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dc459764--qdqrd-eth0", GenerateName:"whisker-5dc459764-", Namespace:"calico-system", SelfLink:"", UID:"17e78583-d977-488f-a43b-2f148172e81f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dc459764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6", Pod:"whisker-5dc459764-qdqrd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali950289066d2", MAC:"a2:91:51:41:18:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:13:53.421400 containerd[1454]: 2026-03-07 02:13:53.413 [INFO][4104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6" Namespace="calico-system" Pod="whisker-5dc459764-qdqrd" WorkloadEndpoint="localhost-k8s-whisker--5dc459764--qdqrd-eth0" Mar 7 02:13:53.450722 containerd[1454]: time="2026-03-07T02:13:53.450476797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:13:53.450722 containerd[1454]: time="2026-03-07T02:13:53.450684315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:13:53.450722 containerd[1454]: time="2026-03-07T02:13:53.450698511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:53.452006 containerd[1454]: time="2026-03-07T02:13:53.450784491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:13:53.480775 systemd[1]: Started cri-containerd-992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6.scope - libcontainer container 992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6. 
Mar 7 02:13:53.487319 containerd[1454]: time="2026-03-07T02:13:53.487204388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:53.489214 containerd[1454]: time="2026-03-07T02:13:53.489022011Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:53.489214 containerd[1454]: time="2026-03-07T02:13:53.489084668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 02:13:53.493182 containerd[1454]: time="2026-03-07T02:13:53.493111144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:53.493968 containerd[1454]: time="2026-03-07T02:13:53.493910456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 746.114665ms" Mar 7 02:13:53.493968 containerd[1454]: time="2026-03-07T02:13:53.493955680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 02:13:53.500174 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:13:53.503576 containerd[1454]: time="2026-03-07T02:13:53.503539541Z" level=info msg="CreateContainer within sandbox \"46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 02:13:53.530216 containerd[1454]: time="2026-03-07T02:13:53.530135696Z" level=info msg="CreateContainer within sandbox \"46b6c54a3efb09dd268f3a1e04c69a96ef91959d6990fb82857da040a5ae0f14\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2fa6d2f4a0b33af8c617db521cb1cc240e275b8d017a4df9f0c61827a428118a\"" Mar 7 02:13:53.535700 containerd[1454]: time="2026-03-07T02:13:53.533684550Z" level=info msg="StartContainer for \"2fa6d2f4a0b33af8c617db521cb1cc240e275b8d017a4df9f0c61827a428118a\"" Mar 7 02:13:53.553269 containerd[1454]: time="2026-03-07T02:13:53.553220288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc459764-qdqrd,Uid:17e78583-d977-488f-a43b-2f148172e81f,Namespace:calico-system,Attempt:0,} returns sandbox id \"992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6\"" Mar 7 02:13:53.556912 containerd[1454]: time="2026-03-07T02:13:53.555572488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 02:13:53.591668 systemd[1]: Started cri-containerd-2fa6d2f4a0b33af8c617db521cb1cc240e275b8d017a4df9f0c61827a428118a.scope - libcontainer container 2fa6d2f4a0b33af8c617db521cb1cc240e275b8d017a4df9f0c61827a428118a. 
Mar 7 02:13:53.611747 systemd-networkd[1381]: calicf43698092a: Gained IPv6LL Mar 7 02:13:53.651115 containerd[1454]: time="2026-03-07T02:13:53.651071157Z" level=info msg="StartContainer for \"2fa6d2f4a0b33af8c617db521cb1cc240e275b8d017a4df9f0c61827a428118a\" returns successfully" Mar 7 02:13:53.692914 kubelet[2503]: I0307 02:13:53.692886 2503 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9a48fdbb-1672-4738-9d0e-540e33f8a579" path="/var/lib/kubelet/pods/9a48fdbb-1672-4738-9d0e-540e33f8a579/volumes" Mar 7 02:13:53.768133 kubelet[2503]: I0307 02:13:53.768014 2503 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 02:13:53.768133 kubelet[2503]: I0307 02:13:53.768050 2503 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 02:13:54.044215 containerd[1454]: time="2026-03-07T02:13:54.044047581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:54.045102 containerd[1454]: time="2026-03-07T02:13:54.045043269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 02:13:54.046469 containerd[1454]: time="2026-03-07T02:13:54.046375904Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:54.051781 containerd[1454]: time="2026-03-07T02:13:54.051712967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:54.052683 containerd[1454]: time="2026-03-07T02:13:54.052610532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 497.014511ms" Mar 7 02:13:54.052683 containerd[1454]: time="2026-03-07T02:13:54.052652421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 02:13:54.058258 containerd[1454]: time="2026-03-07T02:13:54.058182145Z" level=info msg="CreateContainer within sandbox \"992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 02:13:54.071803 containerd[1454]: time="2026-03-07T02:13:54.071746068Z" level=info msg="CreateContainer within sandbox \"992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"edf894469bbc1ae23bb4c128f94d170a0a0aef8fa61abff0a8ec0125df0d31ef\"" Mar 7 02:13:54.073214 containerd[1454]: time="2026-03-07T02:13:54.072219050Z" level=info msg="StartContainer for \"edf894469bbc1ae23bb4c128f94d170a0a0aef8fa61abff0a8ec0125df0d31ef\"" Mar 7 02:13:54.103679 systemd[1]: Started cri-containerd-edf894469bbc1ae23bb4c128f94d170a0a0aef8fa61abff0a8ec0125df0d31ef.scope - libcontainer container 
edf894469bbc1ae23bb4c128f94d170a0a0aef8fa61abff0a8ec0125df0d31ef. Mar 7 02:13:54.144922 containerd[1454]: time="2026-03-07T02:13:54.144830997Z" level=info msg="StartContainer for \"edf894469bbc1ae23bb4c128f94d170a0a0aef8fa61abff0a8ec0125df0d31ef\" returns successfully" Mar 7 02:13:54.146645 containerd[1454]: time="2026-03-07T02:13:54.146596193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 02:13:54.636622 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Mar 7 02:13:54.793936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890246419.mount: Deactivated successfully. Mar 7 02:13:54.822088 containerd[1454]: time="2026-03-07T02:13:54.822037636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:54.823258 containerd[1454]: time="2026-03-07T02:13:54.823218631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 02:13:54.824703 containerd[1454]: time="2026-03-07T02:13:54.824661203Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:54.827171 containerd[1454]: time="2026-03-07T02:13:54.827138238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:13:54.828005 containerd[1454]: time="2026-03-07T02:13:54.827973573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 681.336363ms" Mar 7 02:13:54.828043 containerd[1454]: time="2026-03-07T02:13:54.828008819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 02:13:54.835248 containerd[1454]: time="2026-03-07T02:13:54.835196369Z" level=info msg="CreateContainer within sandbox \"992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 02:13:54.850025 containerd[1454]: time="2026-03-07T02:13:54.849960760Z" level=info msg="CreateContainer within sandbox \"992c65367130b406a67fd73a83b2d47f8066a1dd55aeae4cd390221946940dd6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b51716ffa9d2e5f03a804e50651caa090532a7dddf39269a9ec3b6649a9f967e\"" Mar 7 02:13:54.850660 containerd[1454]: time="2026-03-07T02:13:54.850602597Z" level=info msg="StartContainer for \"b51716ffa9d2e5f03a804e50651caa090532a7dddf39269a9ec3b6649a9f967e\"" Mar 7 02:13:54.883644 systemd[1]: Started cri-containerd-b51716ffa9d2e5f03a804e50651caa090532a7dddf39269a9ec3b6649a9f967e.scope - libcontainer container b51716ffa9d2e5f03a804e50651caa090532a7dddf39269a9ec3b6649a9f967e. 
Mar 7 02:13:54.925765 containerd[1454]: time="2026-03-07T02:13:54.925669623Z" level=info msg="StartContainer for \"b51716ffa9d2e5f03a804e50651caa090532a7dddf39269a9ec3b6649a9f967e\" returns successfully" Mar 7 02:13:55.148017 systemd-networkd[1381]: cali950289066d2: Gained IPv6LL Mar 7 02:13:55.900396 kubelet[2503]: I0307 02:13:55.900303 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-v2tvt" podStartSLOduration=17.383873454 podStartE2EDuration="18.900292516s" podCreationTimestamp="2026-03-07 02:13:37 +0000 UTC" firstStartedPulling="2026-03-07 02:13:51.978582656 +0000 UTC m=+30.438875643" lastFinishedPulling="2026-03-07 02:13:53.495001718 +0000 UTC m=+31.955294705" observedRunningTime="2026-03-07 02:13:53.890328789 +0000 UTC m=+32.350621786" watchObservedRunningTime="2026-03-07 02:13:55.900292516 +0000 UTC m=+34.360585503" Mar 7 02:13:55.901368 kubelet[2503]: I0307 02:13:55.900628 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5dc459764-qdqrd" podStartSLOduration=2.6242194530000003 podStartE2EDuration="3.90062132s" podCreationTimestamp="2026-03-07 02:13:52 +0000 UTC" firstStartedPulling="2026-03-07 02:13:53.555135361 +0000 UTC m=+32.015428359" lastFinishedPulling="2026-03-07 02:13:54.831537239 +0000 UTC m=+33.291830226" observedRunningTime="2026-03-07 02:13:55.900001702 +0000 UTC m=+34.360294690" watchObservedRunningTime="2026-03-07 02:13:55.90062132 +0000 UTC m=+34.360914306" Mar 7 02:13:57.746687 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:44400.service - OpenSSH per-connection server daemon (10.0.0.1:44400). Mar 7 02:13:57.801868 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 44400 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:13:57.803393 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:13:57.807910 systemd-logind[1442]: New session 8 of user core. Mar 7 02:13:57.816771 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 02:13:57.942103 sshd[4400]: pam_unix(sshd:session): session closed for user core Mar 7 02:13:57.945900 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:44400.service: Deactivated successfully. Mar 7 02:13:57.947864 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 02:13:57.948727 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Mar 7 02:13:57.949803 systemd-logind[1442]: Removed session 8. Mar 7 02:14:02.691155 containerd[1454]: time="2026-03-07T02:14:02.691095964Z" level=info msg="StopPodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\"" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.734 [INFO][4449] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.735 [INFO][4449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" iface="eth0" netns="/var/run/netns/cni-b6cf2c03-70e8-674c-1270-a209b954e77e" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.735 [INFO][4449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" iface="eth0" netns="/var/run/netns/cni-b6cf2c03-70e8-674c-1270-a209b954e77e" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.735 [INFO][4449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" iface="eth0" netns="/var/run/netns/cni-b6cf2c03-70e8-674c-1270-a209b954e77e" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.735 [INFO][4449] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.735 [INFO][4449] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.757 [INFO][4457] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.757 [INFO][4457] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.757 [INFO][4457] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.764 [WARNING][4457] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.764 [INFO][4457] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.765 [INFO][4457] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:02.770748 containerd[1454]: 2026-03-07 02:14:02.768 [INFO][4449] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:02.771244 containerd[1454]: time="2026-03-07T02:14:02.771200520Z" level=info msg="TearDown network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" successfully" Mar 7 02:14:02.771271 containerd[1454]: time="2026-03-07T02:14:02.771249261Z" level=info msg="StopPodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" returns successfully" Mar 7 02:14:02.775464 systemd[1]: run-netns-cni\x2db6cf2c03\x2d70e8\x2d674c\x2d1270\x2da209b954e77e.mount: Deactivated successfully. 
Mar 7 02:14:02.776277 containerd[1454]: time="2026-03-07T02:14:02.776180636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-pwfm7,Uid:746832be-1a83-49f8-83ca-d151d465a357,Namespace:calico-system,Attempt:1,}" Mar 7 02:14:02.890388 systemd-networkd[1381]: cali37d4ce041df: Link UP Mar 7 02:14:02.891729 systemd-networkd[1381]: cali37d4ce041df: Gained carrier Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.821 [INFO][4466] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0 calico-apiserver-67fc86959f- calico-system 746832be-1a83-49f8-83ca-d151d465a357 1035 0 2026-03-07 02:13:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fc86959f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67fc86959f-pwfm7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali37d4ce041df [] [] }} ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.822 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.846 [INFO][4478] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" HandleID="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.854 [INFO][4478] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" HandleID="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000470300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-67fc86959f-pwfm7", "timestamp":"2026-03-07 02:14:02.846284094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000472000)} Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.854 [INFO][4478] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.854 [INFO][4478] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.854 [INFO][4478] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.857 [INFO][4478] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.861 [INFO][4478] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.866 [INFO][4478] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.870 [INFO][4478] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.873 [INFO][4478] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.873 [INFO][4478] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.875 [INFO][4478] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.878 [INFO][4478] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.884 [INFO][4478] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.884 [INFO][4478] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" host="localhost" Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.884 [INFO][4478] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:14:02.909154 containerd[1454]: 2026-03-07 02:14:02.884 [INFO][4478] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" HandleID="k8s-pod-network.1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.909905 containerd[1454]: 2026-03-07 02:14:02.887 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"746832be-1a83-49f8-83ca-d151d465a357", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67fc86959f-pwfm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali37d4ce041df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:02.909905 containerd[1454]: 2026-03-07 02:14:02.887 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.909905 containerd[1454]: 2026-03-07 02:14:02.887 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37d4ce041df ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.909905 containerd[1454]: 2026-03-07 02:14:02.891 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.909905 containerd[1454]: 2026-03-07 02:14:02.892 [INFO][4466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"746832be-1a83-49f8-83ca-d151d465a357", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf", Pod:"calico-apiserver-67fc86959f-pwfm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali37d4ce041df", MAC:"42:9a:43:39:7b:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:02.909905 containerd[1454]: 2026-03-07 02:14:02.905 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-pwfm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:02.934818 containerd[1454]: time="2026-03-07T02:14:02.933937200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:14:02.934818 containerd[1454]: time="2026-03-07T02:14:02.933984068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:14:02.934818 containerd[1454]: time="2026-03-07T02:14:02.933996921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:02.934818 containerd[1454]: time="2026-03-07T02:14:02.934100945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:02.959172 systemd[1]: Started cri-containerd-1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf.scope - libcontainer container 1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf. Mar 7 02:14:02.961092 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:54016.service - OpenSSH per-connection server daemon (10.0.0.1:54016). 
Mar 7 02:14:02.975874 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:14:02.996170 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 54016 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:02.998333 sshd[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:03.003224 systemd-logind[1442]: New session 9 of user core. Mar 7 02:14:03.006139 containerd[1454]: time="2026-03-07T02:14:03.006112348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-pwfm7,Uid:746832be-1a83-49f8-83ca-d151d465a357,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf\"" Mar 7 02:14:03.007643 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 02:14:03.010279 containerd[1454]: time="2026-03-07T02:14:03.010022928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 02:14:03.122745 sshd[4531]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:03.126497 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:54016.service: Deactivated successfully. Mar 7 02:14:03.128582 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 02:14:03.129265 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Mar 7 02:14:03.130496 systemd-logind[1442]: Removed session 9. Mar 7 02:14:03.693840 containerd[1454]: time="2026-03-07T02:14:03.693780838Z" level=info msg="StopPodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\"" Mar 7 02:14:03.695199 containerd[1454]: time="2026-03-07T02:14:03.693791747Z" level=info msg="StopPodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\"" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4607] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" iface="eth0" netns="/var/run/netns/cni-fe544a03-b48e-922f-b45a-8f6b2f86fc9b" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" iface="eth0" netns="/var/run/netns/cni-fe544a03-b48e-922f-b45a-8f6b2f86fc9b" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" iface="eth0" netns="/var/run/netns/cni-fe544a03-b48e-922f-b45a-8f6b2f86fc9b" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4607] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4607] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.779 [INFO][4621] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.780 [INFO][4621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.780 [INFO][4621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.787 [WARNING][4621] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.787 [INFO][4621] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.788 [INFO][4621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:03.793156 containerd[1454]: 2026-03-07 02:14:03.790 [INFO][4607] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:03.796208 containerd[1454]: time="2026-03-07T02:14:03.795755958Z" level=info msg="TearDown network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" successfully" Mar 7 02:14:03.796208 containerd[1454]: time="2026-03-07T02:14:03.795783520Z" level=info msg="StopPodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" returns successfully" Mar 7 02:14:03.796662 systemd[1]: run-netns-cni\x2dfe544a03\x2db48e\x2d922f\x2db45a\x2d8f6b2f86fc9b.mount: Deactivated successfully. Mar 7 02:14:03.799316 containerd[1454]: time="2026-03-07T02:14:03.799294213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d48759d7-k2tjz,Uid:676a8c94-335a-4977-b910-64f7a6bc8f5e,Namespace:calico-system,Attempt:1,}" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.749 [INFO][4602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" iface="eth0" netns="/var/run/netns/cni-4cf84e77-40d2-8676-9040-c8dc8c3f95ef" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.750 [INFO][4602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" iface="eth0" netns="/var/run/netns/cni-4cf84e77-40d2-8676-9040-c8dc8c3f95ef" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.752 [INFO][4602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" iface="eth0" netns="/var/run/netns/cni-4cf84e77-40d2-8676-9040-c8dc8c3f95ef" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.752 [INFO][4602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.752 [INFO][4602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.782 [INFO][4623] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.782 [INFO][4623] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.788 [INFO][4623] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.797 [WARNING][4623] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.797 [INFO][4623] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.800 [INFO][4623] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:03.804558 containerd[1454]: 2026-03-07 02:14:03.802 [INFO][4602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:03.807143 containerd[1454]: time="2026-03-07T02:14:03.805200869Z" level=info msg="TearDown network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" successfully" Mar 7 02:14:03.807143 containerd[1454]: time="2026-03-07T02:14:03.805223471Z" level=info msg="StopPodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" returns successfully" Mar 7 02:14:03.807106 systemd[1]: run-netns-cni\x2d4cf84e77\x2d40d2\x2d8676\x2d9040\x2dc8dc8c3f95ef.mount: Deactivated successfully. 
Mar 7 02:14:03.808843 kubelet[2503]: E0307 02:14:03.808817 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:03.809910 containerd[1454]: time="2026-03-07T02:14:03.809877060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jd58h,Uid:55af8801-3665-4537-a222-72d6ad960f77,Namespace:kube-system,Attempt:1,}" Mar 7 02:14:03.939435 systemd-networkd[1381]: cali81f6c278a4b: Link UP Mar 7 02:14:03.940636 systemd-networkd[1381]: cali81f6c278a4b: Gained carrier Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.859 [INFO][4640] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0 calico-kube-controllers-67d48759d7- calico-system 676a8c94-335a-4977-b910-64f7a6bc8f5e 1045 0 2026-03-07 02:13:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67d48759d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67d48759d7-k2tjz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81f6c278a4b [] [] }} ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.860 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.896 [INFO][4667] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" HandleID="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.903 [INFO][4667] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" HandleID="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67d48759d7-k2tjz", "timestamp":"2026-03-07 02:14:03.896340698 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002182c0)} Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.904 [INFO][4667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.904 [INFO][4667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.904 [INFO][4667] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.907 [INFO][4667] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.911 [INFO][4667] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.917 [INFO][4667] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.919 [INFO][4667] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.921 [INFO][4667] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.921 [INFO][4667] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.923 [INFO][4667] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7 Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.927 [INFO][4667] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.934 [INFO][4667] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.934 [INFO][4667] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" host="localhost" Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.934 [INFO][4667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:14:03.954328 containerd[1454]: 2026-03-07 02:14:03.934 [INFO][4667] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" HandleID="k8s-pod-network.8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.955421 containerd[1454]: 2026-03-07 02:14:03.937 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0", GenerateName:"calico-kube-controllers-67d48759d7-", Namespace:"calico-system", SelfLink:"", UID:"676a8c94-335a-4977-b910-64f7a6bc8f5e", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d48759d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67d48759d7-k2tjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81f6c278a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:03.955421 containerd[1454]: 2026-03-07 02:14:03.937 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.955421 containerd[1454]: 2026-03-07 02:14:03.937 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81f6c278a4b ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.955421 containerd[1454]: 2026-03-07 02:14:03.939 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.955421 containerd[1454]: 2026-03-07 02:14:03.939 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0", GenerateName:"calico-kube-controllers-67d48759d7-", Namespace:"calico-system", SelfLink:"", UID:"676a8c94-335a-4977-b910-64f7a6bc8f5e", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d48759d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7", Pod:"calico-kube-controllers-67d48759d7-k2tjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81f6c278a4b", MAC:"62:a5:61:3f:c0:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:03.955421 containerd[1454]: 2026-03-07 02:14:03.949 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7" Namespace="calico-system" Pod="calico-kube-controllers-67d48759d7-k2tjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:03.983204 containerd[1454]: time="2026-03-07T02:14:03.982471938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:14:03.983204 containerd[1454]: time="2026-03-07T02:14:03.983176694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:14:03.983204 containerd[1454]: time="2026-03-07T02:14:03.983188665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:03.983589 containerd[1454]: time="2026-03-07T02:14:03.983268154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:04.006744 systemd[1]: Started cri-containerd-8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7.scope - libcontainer container 8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7. 
Mar 7 02:14:04.023182 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:14:04.056727 containerd[1454]: time="2026-03-07T02:14:04.056602951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d48759d7-k2tjz,Uid:676a8c94-335a-4977-b910-64f7a6bc8f5e,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7\"" Mar 7 02:14:04.059754 systemd-networkd[1381]: caliac6987d824c: Link UP Mar 7 02:14:04.062233 systemd-networkd[1381]: caliac6987d824c: Gained carrier Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.876 [INFO][4658] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--jd58h-eth0 coredns-7d764666f9- kube-system 55af8801-3665-4537-a222-72d6ad960f77 1044 0 2026-03-07 02:13:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-jd58h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac6987d824c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.877 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.906 [INFO][4674] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" HandleID="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.919 [INFO][4674] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" HandleID="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408990), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-jd58h", "timestamp":"2026-03-07 02:14:03.906106228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000210dc0)} Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.919 [INFO][4674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.934 [INFO][4674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:03.934 [INFO][4674] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.008 [INFO][4674] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.014 [INFO][4674] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.019 [INFO][4674] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.022 [INFO][4674] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.025 [INFO][4674] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.025 [INFO][4674] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.029 [INFO][4674] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294 Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.036 [INFO][4674] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.042 [INFO][4674] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.042 [INFO][4674] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" host="localhost" Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.042 [INFO][4674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:14:04.081649 containerd[1454]: 2026-03-07 02:14:04.043 [INFO][4674] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" HandleID="k8s-pod-network.91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.082137 containerd[1454]: 2026-03-07 02:14:04.051 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jd58h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"55af8801-3665-4537-a222-72d6ad960f77", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-jd58h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac6987d824c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:04.082137 containerd[1454]: 2026-03-07 02:14:04.052 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.082137 containerd[1454]: 2026-03-07 02:14:04.052 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac6987d824c ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.082137 containerd[1454]: 2026-03-07 02:14:04.063 
[INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.082137 containerd[1454]: 2026-03-07 02:14:04.064 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jd58h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"55af8801-3665-4537-a222-72d6ad960f77", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294", Pod:"coredns-7d764666f9-jd58h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac6987d824c", MAC:"46:14:c5:ab:ce:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:04.082137 containerd[1454]: 2026-03-07 02:14:04.075 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294" Namespace="kube-system" Pod="coredns-7d764666f9-jd58h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:04.107539 containerd[1454]: time="2026-03-07T02:14:04.107349589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:14:04.107539 containerd[1454]: time="2026-03-07T02:14:04.107451599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:14:04.107713 containerd[1454]: time="2026-03-07T02:14:04.107466577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:04.107713 containerd[1454]: time="2026-03-07T02:14:04.107605928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:04.127715 systemd[1]: Started cri-containerd-91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294.scope - libcontainer container 91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294. Mar 7 02:14:04.141031 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:14:04.181590 containerd[1454]: time="2026-03-07T02:14:04.180466904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jd58h,Uid:55af8801-3665-4537-a222-72d6ad960f77,Namespace:kube-system,Attempt:1,} returns sandbox id \"91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294\"" Mar 7 02:14:04.181683 kubelet[2503]: E0307 02:14:04.181579 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:04.189728 containerd[1454]: time="2026-03-07T02:14:04.189659079Z" level=info msg="CreateContainer within sandbox \"91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:14:04.215449 containerd[1454]: time="2026-03-07T02:14:04.215307100Z" level=info msg="CreateContainer within sandbox \"91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0238e5749311a22f0e8f31337b6eb86bdd0f14b1077ab704704004f3b8b10001\"" Mar 7 02:14:04.217304 containerd[1454]: time="2026-03-07T02:14:04.216654529Z" level=info msg="StartContainer for \"0238e5749311a22f0e8f31337b6eb86bdd0f14b1077ab704704004f3b8b10001\"" Mar 7 02:14:04.246670 systemd[1]: Started cri-containerd-0238e5749311a22f0e8f31337b6eb86bdd0f14b1077ab704704004f3b8b10001.scope - libcontainer container 0238e5749311a22f0e8f31337b6eb86bdd0f14b1077ab704704004f3b8b10001. 
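The kubelet "Nameserver limits exceeded" errors repeated through this log mean the node's resolv.conf lists more nameservers than the kubelet will pass into a pod; only the first three survive, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A tiny Go sketch of that truncation follows (the limit of three matches upstream kubelet behaviour; the helper name is invented):

    package main

    import "fmt"

    // maxNameservers mirrors the per-pod limit the kubelet enforces; anything
    // beyond it is omitted, which is what the "Nameserver limits exceeded"
    // errors in this log are reporting.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) []string {
        if len(servers) <= maxNameservers {
            return servers
        }
        return servers[:maxNameservers]
    }

    func main() {
        // Hypothetical host resolv.conf with one nameserver too many.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(applyNameserverLimit(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }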
Mar 7 02:14:04.279905 containerd[1454]: time="2026-03-07T02:14:04.279753371Z" level=info msg="StartContainer for \"0238e5749311a22f0e8f31337b6eb86bdd0f14b1077ab704704004f3b8b10001\" returns successfully" Mar 7 02:14:04.555732 systemd-networkd[1381]: cali37d4ce041df: Gained IPv6LL Mar 7 02:14:04.607274 containerd[1454]: time="2026-03-07T02:14:04.607205332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:04.608022 containerd[1454]: time="2026-03-07T02:14:04.607974397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 02:14:04.609134 containerd[1454]: time="2026-03-07T02:14:04.609096210Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:04.611632 containerd[1454]: time="2026-03-07T02:14:04.611567159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:04.612352 containerd[1454]: time="2026-03-07T02:14:04.612298760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.602249383s" Mar 7 02:14:04.612352 containerd[1454]: time="2026-03-07T02:14:04.612338154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 02:14:04.613197 containerd[1454]: time="2026-03-07T02:14:04.613130951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 02:14:04.616889 containerd[1454]: time="2026-03-07T02:14:04.616852097Z" level=info msg="CreateContainer within sandbox \"1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 02:14:04.629926 containerd[1454]: time="2026-03-07T02:14:04.629891721Z" level=info msg="CreateContainer within sandbox \"1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d4ec6bb4648745ac21399e38cbe891e71e4181bb73474172799620119ed8fee\"" Mar 7 02:14:04.631571 containerd[1454]: time="2026-03-07T02:14:04.630241534Z" level=info msg="StartContainer for \"8d4ec6bb4648745ac21399e38cbe891e71e4181bb73474172799620119ed8fee\"" Mar 7 02:14:04.665661 systemd[1]: Started cri-containerd-8d4ec6bb4648745ac21399e38cbe891e71e4181bb73474172799620119ed8fee.scope - libcontainer container 8d4ec6bb4648745ac21399e38cbe891e71e4181bb73474172799620119ed8fee. 
Mar 7 02:14:04.714931 containerd[1454]: time="2026-03-07T02:14:04.714866632Z" level=info msg="StartContainer for \"8d4ec6bb4648745ac21399e38cbe891e71e4181bb73474172799620119ed8fee\" returns successfully" Mar 7 02:14:04.912248 kubelet[2503]: E0307 02:14:04.911786 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:04.934316 kubelet[2503]: I0307 02:14:04.934129 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-67fc86959f-pwfm7" podStartSLOduration=27.330850048 podStartE2EDuration="28.93411831s" podCreationTimestamp="2026-03-07 02:13:36 +0000 UTC" firstStartedPulling="2026-03-07 02:14:03.009777612 +0000 UTC m=+41.470070599" lastFinishedPulling="2026-03-07 02:14:04.613045875 +0000 UTC m=+43.073338861" observedRunningTime="2026-03-07 02:14:04.92102071 +0000 UTC m=+43.381313697" watchObservedRunningTime="2026-03-07 02:14:04.93411831 +0000 UTC m=+43.394411297" Mar 7 02:14:05.672114 kubelet[2503]: I0307 02:14:05.672030 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jd58h" podStartSLOduration=37.672017997 podStartE2EDuration="37.672017997s" podCreationTimestamp="2026-03-07 02:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:14:04.938311928 +0000 UTC m=+43.398604936" watchObservedRunningTime="2026-03-07 02:14:05.672017997 +0000 UTC m=+44.132310983" Mar 7 02:14:05.695581 containerd[1454]: time="2026-03-07T02:14:05.695472672Z" level=info msg="StopPodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\"" Mar 7 02:14:05.696370 containerd[1454]: time="2026-03-07T02:14:05.696142858Z" level=info msg="StopPodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\"" Mar 7 02:14:05.697153 containerd[1454]: time="2026-03-07T02:14:05.696988041Z" level=info msg="StopPodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\"" Mar 7 02:14:05.837727 systemd-networkd[1381]: caliac6987d824c: Gained IPv6LL Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.776 [INFO][4928] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.776 [INFO][4928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" iface="eth0" netns="/var/run/netns/cni-a2c674bb-69d9-9212-f05e-5587ead881c2" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.777 [INFO][4928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" iface="eth0" netns="/var/run/netns/cni-a2c674bb-69d9-9212-f05e-5587ead881c2" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.777 [INFO][4928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" iface="eth0" netns="/var/run/netns/cni-a2c674bb-69d9-9212-f05e-5587ead881c2" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.777 [INFO][4928] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.777 [INFO][4928] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.817 [INFO][4963] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.817 [INFO][4963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.817 [INFO][4963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.831 [WARNING][4963] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.831 [INFO][4963] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.834 [INFO][4963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:05.853194 containerd[1454]: 2026-03-07 02:14:05.839 [INFO][4928] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:05.856595 containerd[1454]: time="2026-03-07T02:14:05.855859080Z" level=info msg="TearDown network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" successfully" Mar 7 02:14:05.856595 containerd[1454]: time="2026-03-07T02:14:05.855885338Z" level=info msg="StopPodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" returns successfully" Mar 7 02:14:05.859473 systemd[1]: run-netns-cni\x2da2c674bb\x2d69d9\x2d9212\x2df05e\x2d5587ead881c2.mount: Deactivated successfully. Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.788 [INFO][4943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.788 [INFO][4943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" iface="eth0" netns="/var/run/netns/cni-3f39f6c0-f8a5-5a3f-0661-d6f531066d5d" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.789 [INFO][4943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" iface="eth0" netns="/var/run/netns/cni-3f39f6c0-f8a5-5a3f-0661-d6f531066d5d" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.789 [INFO][4943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" iface="eth0" netns="/var/run/netns/cni-3f39f6c0-f8a5-5a3f-0661-d6f531066d5d" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.789 [INFO][4943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.789 [INFO][4943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.851 [INFO][4971] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.851 [INFO][4971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.851 [INFO][4971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.858 [WARNING][4971] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.858 [INFO][4971] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.860 [INFO][4971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:05.868152 containerd[1454]: 2026-03-07 02:14:05.864 [INFO][4943] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:05.870037 containerd[1454]: time="2026-03-07T02:14:05.869933136Z" level=info msg="TearDown network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" successfully" Mar 7 02:14:05.870037 containerd[1454]: time="2026-03-07T02:14:05.869959505Z" level=info msg="StopPodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" returns successfully" Mar 7 02:14:05.870839 systemd[1]: run-netns-cni\x2d3f39f6c0\x2df8a5\x2d5a3f\x2d0661\x2dd6f531066d5d.mount: Deactivated successfully. 
Mar 7 02:14:05.900083 containerd[1454]: time="2026-03-07T02:14:05.900059907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-bhm6g,Uid:400859bd-9f1f-404b-b164-62fa2410895c,Namespace:calico-system,Attempt:1,}" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.765 [INFO][4929] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.766 [INFO][4929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" iface="eth0" netns="/var/run/netns/cni-664eeee2-c5d8-86c7-fd64-aad339f95827" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.768 [INFO][4929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" iface="eth0" netns="/var/run/netns/cni-664eeee2-c5d8-86c7-fd64-aad339f95827" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.768 [INFO][4929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" iface="eth0" netns="/var/run/netns/cni-664eeee2-c5d8-86c7-fd64-aad339f95827" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.768 [INFO][4929] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.768 [INFO][4929] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.835 [INFO][4958] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.835 [INFO][4958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.835 [INFO][4958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.844 [WARNING][4958] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.844 [INFO][4958] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.846 [INFO][4958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:05.903129 containerd[1454]: 2026-03-07 02:14:05.857 [INFO][4929] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:05.906087 containerd[1454]: time="2026-03-07T02:14:05.905955930Z" level=info msg="TearDown network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" successfully" Mar 7 02:14:05.906087 containerd[1454]: time="2026-03-07T02:14:05.905988171Z" level=info msg="StopPodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" returns successfully" Mar 7 02:14:05.906161 kubelet[2503]: E0307 02:14:05.906132 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:05.906711 containerd[1454]: time="2026-03-07T02:14:05.906674124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gc87b,Uid:49673b6d-eace-4732-9c08-550044a6a02f,Namespace:kube-system,Attempt:1,}" Mar 7 02:14:05.907360 systemd[1]: run-netns-cni\x2d664eeee2\x2dc5d8\x2d86c7\x2dfd64\x2daad339f95827.mount: Deactivated successfully. Mar 7 02:14:05.909981 containerd[1454]: time="2026-03-07T02:14:05.909920896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9fkdf,Uid:0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab,Namespace:calico-system,Attempt:1,}" Mar 7 02:14:05.914472 kubelet[2503]: E0307 02:14:05.914328 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:05.965480 systemd-networkd[1381]: cali81f6c278a4b: Gained IPv6LL Mar 7 02:14:06.120709 systemd-networkd[1381]: calie4d5cbdc1d9: Link UP Mar 7 02:14:06.120947 systemd-networkd[1381]: calie4d5cbdc1d9: Gained carrier Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:05.999 [INFO][4997] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--gc87b-eth0 coredns-7d764666f9- kube-system 49673b6d-eace-4732-9c08-550044a6a02f 1091 0 2026-03-07 02:13:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-gc87b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4d5cbdc1d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.001 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.058 [INFO][5042] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" HandleID="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.066 [INFO][5042] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" HandleID="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b06a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-gc87b", "timestamp":"2026-03-07 02:14:06.058912666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000290840)} Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.066 [INFO][5042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.066 [INFO][5042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.066 [INFO][5042] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.074 [INFO][5042] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.085 [INFO][5042] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.091 [INFO][5042] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.094 [INFO][5042] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.096 [INFO][5042] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.097 [INFO][5042] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.098 [INFO][5042] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4 Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.103 [INFO][5042] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.111 [INFO][5042] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.111 [INFO][5042] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" host="localhost" Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.111 [INFO][5042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
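The IPAM lines above show Calico confirming this node's affinity for the 192.168.88.128/26 block and then claiming 192.168.88.134 for the coredns pod. The block-affinity step amounts to picking the lowest unassigned address in the node's block; a toy sketch of just that step (the set of previously allocated addresses is hypothetical, and Calico's real allocator additionally tracks handles, attributes and the host-wide lock seen in the log):

    from ipaddress import ip_network

    def next_free_in_block(block: str, allocated: set) -> str:
        """Lowest address in the block not yet handed out (illustration only)."""
        for addr in ip_network(block):
            if str(addr) not in allocated:
                return str(addr)
        raise RuntimeError(f"block {block} is full")

    # Hypothetical prior allocations on this node; the log only shows the
    # later claims for .134, .135 and .136.
    taken = {f"192.168.88.{i}" for i in range(128, 134)}
    print(next_free_in_block("192.168.88.128/26", taken))  # -> 192.168.88.134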
Mar 7 02:14:06.160543 containerd[1454]: 2026-03-07 02:14:06.111 [INFO][5042] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" HandleID="k8s-pod-network.044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.161729 containerd[1454]: 2026-03-07 02:14:06.115 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--gc87b-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"49673b6d-eace-4732-9c08-550044a6a02f", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-gc87b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4d5cbdc1d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:06.161729 containerd[1454]: 2026-03-07 02:14:06.115 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.161729 containerd[1454]: 2026-03-07 02:14:06.115 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4d5cbdc1d9 ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.161729 containerd[1454]: 2026-03-07 02:14:06.118 
[INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.161729 containerd[1454]: 2026-03-07 02:14:06.118 [INFO][4997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--gc87b-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"49673b6d-eace-4732-9c08-550044a6a02f", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4", Pod:"coredns-7d764666f9-gc87b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4d5cbdc1d9", MAC:"0a:a8:b8:d3:65:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:06.161729 containerd[1454]: 2026-03-07 02:14:06.143 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4" Namespace="kube-system" Pod="coredns-7d764666f9-gc87b" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:06.250573 containerd[1454]: time="2026-03-07T02:14:06.249456059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:14:06.250573 containerd[1454]: time="2026-03-07T02:14:06.249568599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:14:06.250573 containerd[1454]: time="2026-03-07T02:14:06.249582396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:06.250573 containerd[1454]: time="2026-03-07T02:14:06.249681410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:06.260102 systemd-networkd[1381]: cali0dbd50f9ebd: Link UP Mar 7 02:14:06.261643 systemd-networkd[1381]: cali0dbd50f9ebd: Gained carrier Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:05.985 [INFO][4986] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0 calico-apiserver-67fc86959f- calico-system 400859bd-9f1f-404b-b164-62fa2410895c 1090 0 2026-03-07 02:13:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fc86959f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67fc86959f-bhm6g eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0dbd50f9ebd [] [] }} ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:05.985 [INFO][4986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.057 [INFO][5030] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" HandleID="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.072 [INFO][5030] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" HandleID="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f0230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-67fc86959f-bhm6g", "timestamp":"2026-03-07 02:14:06.057857953 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000136840)} Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.072 [INFO][5030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.111 [INFO][5030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.111 [INFO][5030] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.177 [INFO][5030] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.195 [INFO][5030] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.213 [INFO][5030] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.219 [INFO][5030] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.225 [INFO][5030] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.225 [INFO][5030] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.227 [INFO][5030] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.236 [INFO][5030] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.246 [INFO][5030] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.246 [INFO][5030] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" host="localhost" Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.246 [INFO][5030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:14:06.280146 containerd[1454]: 2026-03-07 02:14:06.246 [INFO][5030] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" HandleID="k8s-pod-network.099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.280741 containerd[1454]: 2026-03-07 02:14:06.252 [INFO][4986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"400859bd-9f1f-404b-b164-62fa2410895c", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67fc86959f-bhm6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0dbd50f9ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:06.280741 containerd[1454]: 2026-03-07 02:14:06.252 [INFO][4986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.280741 containerd[1454]: 2026-03-07 02:14:06.252 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dbd50f9ebd ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.280741 containerd[1454]: 2026-03-07 02:14:06.261 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.280741 containerd[1454]: 2026-03-07 02:14:06.261 [INFO][4986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"400859bd-9f1f-404b-b164-62fa2410895c", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d", Pod:"calico-apiserver-67fc86959f-bhm6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0dbd50f9ebd", MAC:"42:fa:3d:dd:42:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:06.280741 containerd[1454]: 2026-03-07 02:14:06.272 [INFO][4986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d" Namespace="calico-system" Pod="calico-apiserver-67fc86959f-bhm6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:06.290690 systemd[1]: Started cri-containerd-044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4.scope - libcontainer container 044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4. Mar 7 02:14:06.311044 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:14:06.326062 containerd[1454]: time="2026-03-07T02:14:06.324805844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:14:06.326062 containerd[1454]: time="2026-03-07T02:14:06.325676995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:14:06.326062 containerd[1454]: time="2026-03-07T02:14:06.325712251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:06.326062 containerd[1454]: time="2026-03-07T02:14:06.325957549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:06.354491 systemd-networkd[1381]: cali50309f4a100: Link UP Mar 7 02:14:06.356193 systemd-networkd[1381]: cali50309f4a100: Gained carrier Mar 7 02:14:06.367689 systemd[1]: Started cri-containerd-099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d.scope - libcontainer container 099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d. Mar 7 02:14:06.372921 containerd[1454]: time="2026-03-07T02:14:06.372884423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gc87b,Uid:49673b6d-eace-4732-9c08-550044a6a02f,Namespace:kube-system,Attempt:1,} returns sandbox id \"044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4\"" Mar 7 02:14:06.373576 kubelet[2503]: E0307 02:14:06.373430 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:06.389561 containerd[1454]: time="2026-03-07T02:14:06.388689779Z" level=info msg="CreateContainer within sandbox \"044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.041 [INFO][5012] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0 goldmane-9f7667bb8- calico-system 0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab 1089 0 2026-03-07 02:13:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-9fkdf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali50309f4a100 [] [] }} ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.042 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.103 [INFO][5051] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" HandleID="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.126 [INFO][5051] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" HandleID="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004036f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-9fkdf", "timestamp":"2026-03-07 02:14:06.103385848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001bf1e0)} Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.126 [INFO][5051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.246 [INFO][5051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.247 [INFO][5051] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.278 [INFO][5051] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.294 [INFO][5051] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.308 [INFO][5051] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.310 [INFO][5051] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.313 [INFO][5051] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.313 [INFO][5051] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.316 [INFO][5051] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1 Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.325 [INFO][5051] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.333 [INFO][5051] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.333 [INFO][5051] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" host="localhost" Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.333 [INFO][5051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:14:06.393942 containerd[1454]: 2026-03-07 02:14:06.334 [INFO][5051] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" HandleID="k8s-pod-network.4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.394462 containerd[1454]: 2026-03-07 02:14:06.349 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-9fkdf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50309f4a100", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:06.394462 containerd[1454]: 2026-03-07 02:14:06.350 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.394462 containerd[1454]: 2026-03-07 02:14:06.350 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50309f4a100 ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.394462 containerd[1454]: 2026-03-07 02:14:06.357 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.394462 containerd[1454]: 2026-03-07 02:14:06.359 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1", Pod:"goldmane-9f7667bb8-9fkdf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50309f4a100", MAC:"da:2c:75:bc:20:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:06.394462 containerd[1454]: 2026-03-07 02:14:06.378 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1" Namespace="calico-system" Pod="goldmane-9f7667bb8-9fkdf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:06.405357 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:14:06.420738 containerd[1454]: time="2026-03-07T02:14:06.420708818Z" level=info msg="CreateContainer within sandbox \"044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0888b6c7ae4ada2a250d037996fb02351a2c83a4a5b50ba112038baa7f996b70\"" Mar 7 02:14:06.421897 containerd[1454]: time="2026-03-07T02:14:06.421842258Z" level=info msg="StartContainer for \"0888b6c7ae4ada2a250d037996fb02351a2c83a4a5b50ba112038baa7f996b70\"" Mar 7 02:14:06.434321 containerd[1454]: time="2026-03-07T02:14:06.434159608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:14:06.434321 containerd[1454]: time="2026-03-07T02:14:06.434267289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:14:06.434321 containerd[1454]: time="2026-03-07T02:14:06.434279031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:06.434777 containerd[1454]: time="2026-03-07T02:14:06.434606822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:14:06.455786 containerd[1454]: time="2026-03-07T02:14:06.455697635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc86959f-bhm6g,Uid:400859bd-9f1f-404b-b164-62fa2410895c,Namespace:calico-system,Attempt:1,} returns sandbox id \"099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d\"" Mar 7 02:14:06.463362 containerd[1454]: time="2026-03-07T02:14:06.463312631Z" level=info msg="CreateContainer within sandbox \"099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 02:14:06.479462 systemd[1]: Started cri-containerd-4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1.scope - libcontainer container 4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1. Mar 7 02:14:06.489748 systemd[1]: Started cri-containerd-0888b6c7ae4ada2a250d037996fb02351a2c83a4a5b50ba112038baa7f996b70.scope - libcontainer container 0888b6c7ae4ada2a250d037996fb02351a2c83a4a5b50ba112038baa7f996b70. Mar 7 02:14:06.494367 containerd[1454]: time="2026-03-07T02:14:06.493558084Z" level=info msg="CreateContainer within sandbox \"099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"92fbaa756bae32d3046c74d0127c309f976ea84573860ba69c83e3c804d21101\"" Mar 7 02:14:06.494474 containerd[1454]: time="2026-03-07T02:14:06.494367340Z" level=info msg="StartContainer for \"92fbaa756bae32d3046c74d0127c309f976ea84573860ba69c83e3c804d21101\"" Mar 7 02:14:06.520380 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:14:06.546970 containerd[1454]: time="2026-03-07T02:14:06.546907518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:06.548570 containerd[1454]: time="2026-03-07T02:14:06.548487897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 02:14:06.548844 systemd[1]: Started cri-containerd-92fbaa756bae32d3046c74d0127c309f976ea84573860ba69c83e3c804d21101.scope - libcontainer container 92fbaa756bae32d3046c74d0127c309f976ea84573860ba69c83e3c804d21101. 
Mar 7 02:14:06.550889 containerd[1454]: time="2026-03-07T02:14:06.550857702Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:06.552261 containerd[1454]: time="2026-03-07T02:14:06.552161906Z" level=info msg="StartContainer for \"0888b6c7ae4ada2a250d037996fb02351a2c83a4a5b50ba112038baa7f996b70\" returns successfully" Mar 7 02:14:06.561335 containerd[1454]: time="2026-03-07T02:14:06.561310285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:06.562651 containerd[1454]: time="2026-03-07T02:14:06.562610923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.949437723s" Mar 7 02:14:06.562651 containerd[1454]: time="2026-03-07T02:14:06.562637632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 02:14:06.579421 containerd[1454]: time="2026-03-07T02:14:06.579344632Z" level=info msg="CreateContainer within sandbox \"8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 02:14:06.579964 containerd[1454]: time="2026-03-07T02:14:06.579944521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9fkdf,Uid:0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1\"" Mar 7 02:14:06.583848 containerd[1454]: time="2026-03-07T02:14:06.583789385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 02:14:06.598196 containerd[1454]: time="2026-03-07T02:14:06.598134113Z" level=info msg="CreateContainer within sandbox \"8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"da0c366a3c9f0b886a1af86bc6e6ccca5c762be935cde3485606cfd5c95841a4\"" Mar 7 02:14:06.599535 containerd[1454]: time="2026-03-07T02:14:06.598787569Z" level=info msg="StartContainer for \"da0c366a3c9f0b886a1af86bc6e6ccca5c762be935cde3485606cfd5c95841a4\"" Mar 7 02:14:06.636713 systemd[1]: Started cri-containerd-da0c366a3c9f0b886a1af86bc6e6ccca5c762be935cde3485606cfd5c95841a4.scope - libcontainer container da0c366a3c9f0b886a1af86bc6e6ccca5c762be935cde3485606cfd5c95841a4. 
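The kube-controllers pull above reports 52406348 bytes read in 1.949437723s, roughly 26.9 MB/s from the registry for that image. A quick back-of-the-envelope check (figures taken from the log; the rate is only a rough download estimate, since "bytes read" covers the layers fetched for this pull):

    bytes_read = 52_406_348          # "bytes read" from the pull record above
    duration_s = 1.949437723         # reported pull duration
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # ~26.9 MB/s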
Mar 7 02:14:06.651167 containerd[1454]: time="2026-03-07T02:14:06.651093298Z" level=info msg="StartContainer for \"92fbaa756bae32d3046c74d0127c309f976ea84573860ba69c83e3c804d21101\" returns successfully" Mar 7 02:14:06.698641 containerd[1454]: time="2026-03-07T02:14:06.698298101Z" level=info msg="StartContainer for \"da0c366a3c9f0b886a1af86bc6e6ccca5c762be935cde3485606cfd5c95841a4\" returns successfully" Mar 7 02:14:06.933240 kubelet[2503]: I0307 02:14:06.933138 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67d48759d7-k2tjz" podStartSLOduration=27.426706147 podStartE2EDuration="29.933127311s" podCreationTimestamp="2026-03-07 02:13:37 +0000 UTC" firstStartedPulling="2026-03-07 02:14:04.058958908 +0000 UTC m=+42.519251905" lastFinishedPulling="2026-03-07 02:14:06.565380083 +0000 UTC m=+45.025673069" observedRunningTime="2026-03-07 02:14:06.932142051 +0000 UTC m=+45.392435039" watchObservedRunningTime="2026-03-07 02:14:06.933127311 +0000 UTC m=+45.393420298" Mar 7 02:14:06.938222 kubelet[2503]: E0307 02:14:06.937651 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:06.939647 kubelet[2503]: E0307 02:14:06.939388 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:06.948317 kubelet[2503]: I0307 02:14:06.948268 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-67fc86959f-bhm6g" podStartSLOduration=30.948260161 podStartE2EDuration="30.948260161s" podCreationTimestamp="2026-03-07 02:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:14:06.946738851 +0000 UTC m=+45.407031838" watchObservedRunningTime="2026-03-07 02:14:06.948260161 +0000 UTC m=+45.408553148" Mar 7 02:14:07.006965 kubelet[2503]: I0307 02:14:07.006916 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gc87b" podStartSLOduration=39.006904549 podStartE2EDuration="39.006904549s" podCreationTimestamp="2026-03-07 02:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:14:06.972033502 +0000 UTC m=+45.432326488" watchObservedRunningTime="2026-03-07 02:14:07.006904549 +0000 UTC m=+45.467197536" Mar 7 02:14:07.693240 systemd-networkd[1381]: cali50309f4a100: Gained IPv6LL Mar 7 02:14:07.836776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205397839.mount: Deactivated successfully. Mar 7 02:14:07.885166 systemd-networkd[1381]: cali0dbd50f9ebd: Gained IPv6LL Mar 7 02:14:07.939372 kubelet[2503]: E0307 02:14:07.939319 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:14:07.939917 kubelet[2503]: I0307 02:14:07.939747 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:14:08.011713 systemd-networkd[1381]: calie4d5cbdc1d9: Gained IPv6LL Mar 7 02:14:08.143326 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). 
Mar 7 02:14:08.264878 containerd[1454]: time="2026-03-07T02:14:08.264719403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:08.265741 containerd[1454]: time="2026-03-07T02:14:08.265687030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 02:14:08.267221 containerd[1454]: time="2026-03-07T02:14:08.267174527Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:08.269805 containerd[1454]: time="2026-03-07T02:14:08.269748717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:14:08.270557 containerd[1454]: time="2026-03-07T02:14:08.270493598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.686656294s" Mar 7 02:14:08.270597 containerd[1454]: time="2026-03-07T02:14:08.270562046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 02:14:08.274939 containerd[1454]: time="2026-03-07T02:14:08.274905970Z" level=info msg="CreateContainer within sandbox \"4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 02:14:08.285577 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:08.287027 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:08.288754 containerd[1454]: time="2026-03-07T02:14:08.288692645Z" level=info msg="CreateContainer within sandbox \"4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7599e5d653c75056d754e0be9b786004bab1242b68504473763d41db4e8ce0eb\"" Mar 7 02:14:08.289584 containerd[1454]: time="2026-03-07T02:14:08.289465711Z" level=info msg="StartContainer for \"7599e5d653c75056d754e0be9b786004bab1242b68504473763d41db4e8ce0eb\"" Mar 7 02:14:08.294311 systemd-logind[1442]: New session 10 of user core. Mar 7 02:14:08.298660 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 02:14:08.327665 systemd[1]: Started cri-containerd-7599e5d653c75056d754e0be9b786004bab1242b68504473763d41db4e8ce0eb.scope - libcontainer container 7599e5d653c75056d754e0be9b786004bab1242b68504473763d41db4e8ce0eb. Mar 7 02:14:08.375182 containerd[1454]: time="2026-03-07T02:14:08.374995698Z" level=info msg="StartContainer for \"7599e5d653c75056d754e0be9b786004bab1242b68504473763d41db4e8ce0eb\" returns successfully" Mar 7 02:14:08.564723 sshd[5437]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:08.574743 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:54020.service: Deactivated successfully. Mar 7 02:14:08.576487 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 7 02:14:08.578187 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Mar 7 02:14:08.579695 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Mar 7 02:14:08.580968 systemd-logind[1442]: Removed session 10. Mar 7 02:14:08.624210 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:08.625720 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:08.632781 systemd-logind[1442]: New session 11 of user core. Mar 7 02:14:08.636925 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 02:14:08.797710 sshd[5494]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:08.808751 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:54036.service: Deactivated successfully. Mar 7 02:14:08.811166 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 02:14:08.815274 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Mar 7 02:14:08.824962 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:54038.service - OpenSSH per-connection server daemon (10.0.0.1:54038). Mar 7 02:14:08.826494 systemd-logind[1442]: Removed session 11. Mar 7 02:14:08.854077 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 54038 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:08.855822 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:08.860455 systemd-logind[1442]: New session 12 of user core. Mar 7 02:14:08.868679 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 02:14:08.986992 sshd[5507]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:08.990148 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:54038.service: Deactivated successfully. Mar 7 02:14:08.992307 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 02:14:08.994235 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Mar 7 02:14:08.996050 systemd-logind[1442]: Removed session 12. Mar 7 02:14:09.945227 kubelet[2503]: I0307 02:14:09.945181 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:14:11.291345 kubelet[2503]: I0307 02:14:11.291294 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:14:13.300626 kubelet[2503]: I0307 02:14:13.300465 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:14:13.317759 kubelet[2503]: I0307 02:14:13.316709 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-9fkdf" podStartSLOduration=34.628809649 podStartE2EDuration="36.316700048s" podCreationTimestamp="2026-03-07 02:13:37 +0000 UTC" firstStartedPulling="2026-03-07 02:14:06.583314227 +0000 UTC m=+45.043607224" lastFinishedPulling="2026-03-07 02:14:08.271204635 +0000 UTC m=+46.731497623" observedRunningTime="2026-03-07 02:14:08.953764545 +0000 UTC m=+47.414057531" watchObservedRunningTime="2026-03-07 02:14:13.316700048 +0000 UTC m=+51.776993035" Mar 7 02:14:13.998237 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:35758.service - OpenSSH per-connection server daemon (10.0.0.1:35758). 
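The goldmane startup record above breaks down as end-to-end time minus the image-pull window: podStartE2EDuration (about 36.317s from pod creation at 02:13:37 to the watch-observed running time at 02:14:13.317) less the pull from 02:14:06.583 to 02:14:08.271 gives the reported podStartSLOduration of about 34.629s. A quick check of that arithmetic, assuming the timestamps are read as printed (sub-microsecond digits truncated):

    from datetime import datetime, timezone

    UTC = timezone.utc
    created    = datetime(2026, 3, 7, 2, 13, 37, 0, UTC)        # podCreationTimestamp
    observed   = datetime(2026, 3, 7, 2, 14, 13, 316700, UTC)   # watchObservedRunningTime
    pull_start = datetime(2026, 3, 7, 2, 14, 6, 583314, UTC)    # firstStartedPulling
    pull_end   = datetime(2026, 3, 7, 2, 14, 8, 271204, UTC)    # lastFinishedPulling

    e2e = (observed - created).total_seconds()            # ~36.317 s (podStartE2EDuration)
    slo = e2e - (pull_end - pull_start).total_seconds()   # ~34.629 s (podStartSLOduration)
    print(f"E2E {e2e:.3f}s  SLO {slo:.3f}s")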
Mar 7 02:14:14.046154 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 35758 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:14.047544 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:14.051823 systemd-logind[1442]: New session 13 of user core. Mar 7 02:14:14.061655 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 02:14:14.177891 sshd[5593]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:14.185807 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:35758.service: Deactivated successfully. Mar 7 02:14:14.187782 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 02:14:14.189296 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Mar 7 02:14:14.194978 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:35762.service - OpenSSH per-connection server daemon (10.0.0.1:35762). Mar 7 02:14:14.196430 systemd-logind[1442]: Removed session 13. Mar 7 02:14:14.235821 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 35762 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:14.237699 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:14.242099 systemd-logind[1442]: New session 14 of user core. Mar 7 02:14:14.256751 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 02:14:14.495961 sshd[5607]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:14.502228 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:35762.service: Deactivated successfully. Mar 7 02:14:14.504047 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 02:14:14.505444 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Mar 7 02:14:14.511762 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:35768.service - OpenSSH per-connection server daemon (10.0.0.1:35768). Mar 7 02:14:14.513081 systemd-logind[1442]: Removed session 14. Mar 7 02:14:14.546726 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 35768 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:14.548113 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:14.553066 systemd-logind[1442]: New session 15 of user core. Mar 7 02:14:14.560714 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 02:14:15.074310 sshd[5619]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:15.082717 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:35768.service: Deactivated successfully. Mar 7 02:14:15.084436 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 02:14:15.085222 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Mar 7 02:14:15.100098 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:35784.service - OpenSSH per-connection server daemon (10.0.0.1:35784). Mar 7 02:14:15.103601 systemd-logind[1442]: Removed session 15. Mar 7 02:14:15.137959 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 35784 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:15.139312 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:15.144303 systemd-logind[1442]: New session 16 of user core. Mar 7 02:14:15.154676 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 7 02:14:15.417731 sshd[5645]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:15.425937 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:35784.service: Deactivated successfully. Mar 7 02:14:15.427781 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 02:14:15.430444 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Mar 7 02:14:15.437870 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:35798.service - OpenSSH per-connection server daemon (10.0.0.1:35798). Mar 7 02:14:15.438977 systemd-logind[1442]: Removed session 16. Mar 7 02:14:15.466556 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 35798 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:15.468141 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:15.472366 systemd-logind[1442]: New session 17 of user core. Mar 7 02:14:15.480664 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 02:14:15.631435 sshd[5658]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:15.635617 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:35798.service: Deactivated successfully. Mar 7 02:14:15.637728 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 02:14:15.638623 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Mar 7 02:14:15.639847 systemd-logind[1442]: Removed session 17. Mar 7 02:14:20.642495 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:38616.service - OpenSSH per-connection server daemon (10.0.0.1:38616). Mar 7 02:14:20.685909 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 38616 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:20.687666 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:20.692301 systemd-logind[1442]: New session 18 of user core. Mar 7 02:14:20.699681 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 02:14:20.836769 sshd[5703]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:20.839953 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:38616.service: Deactivated successfully. Mar 7 02:14:20.841896 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 02:14:20.843612 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Mar 7 02:14:20.844914 systemd-logind[1442]: Removed session 18. Mar 7 02:14:21.683295 containerd[1454]: time="2026-03-07T02:14:21.682971443Z" level=info msg="StopPodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\"" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.725 [WARNING][5726] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--gc87b-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"49673b6d-eace-4732-9c08-550044a6a02f", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4", Pod:"coredns-7d764666f9-gc87b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4d5cbdc1d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.726 [INFO][5726] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.726 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" iface="eth0" netns="" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.726 [INFO][5726] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.726 [INFO][5726] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.772 [INFO][5737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.774 [INFO][5737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.774 [INFO][5737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.782 [WARNING][5737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.782 [INFO][5737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.785 [INFO][5737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:21.792133 containerd[1454]: 2026-03-07 02:14:21.788 [INFO][5726] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.792669 containerd[1454]: time="2026-03-07T02:14:21.792132544Z" level=info msg="TearDown network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" successfully" Mar 7 02:14:21.792669 containerd[1454]: time="2026-03-07T02:14:21.792152992Z" level=info msg="StopPodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" returns successfully" Mar 7 02:14:21.858002 containerd[1454]: time="2026-03-07T02:14:21.857935754Z" level=info msg="RemovePodSandbox for \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\"" Mar 7 02:14:21.860364 containerd[1454]: time="2026-03-07T02:14:21.860294187Z" level=info msg="Forcibly stopping sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\"" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.900 [WARNING][5755] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--gc87b-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"49673b6d-eace-4732-9c08-550044a6a02f", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"044aa238b67cbe028bc3fe4612111cb9f9855d17e14de6b39b5420f7d6865fb4", Pod:"coredns-7d764666f9-gc87b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4d5cbdc1d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.901 [INFO][5755] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.901 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" iface="eth0" netns="" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.901 [INFO][5755] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.901 [INFO][5755] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.928 [INFO][5765] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.929 [INFO][5765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.929 [INFO][5765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.936 [WARNING][5765] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.936 [INFO][5765] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" HandleID="k8s-pod-network.a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Workload="localhost-k8s-coredns--7d764666f9--gc87b-eth0" Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.937 [INFO][5765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:21.943469 containerd[1454]: 2026-03-07 02:14:21.940 [INFO][5755] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e" Mar 7 02:14:21.943469 containerd[1454]: time="2026-03-07T02:14:21.943386652Z" level=info msg="TearDown network for sandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" successfully" Mar 7 02:14:21.964561 containerd[1454]: time="2026-03-07T02:14:21.964378592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:14:21.964561 containerd[1454]: time="2026-03-07T02:14:21.964553257Z" level=info msg="RemovePodSandbox \"a40c6bda0a262557d60f6f8e7f9f1ed4f02665f2f7298242fb239f2e9e61469e\" returns successfully" Mar 7 02:14:21.972773 containerd[1454]: time="2026-03-07T02:14:21.972707368Z" level=info msg="StopPodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\"" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.015 [WARNING][5785] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" WorkloadEndpoint="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.015 [INFO][5785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.015 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" iface="eth0" netns="" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.015 [INFO][5785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.015 [INFO][5785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.041 [INFO][5793] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.041 [INFO][5793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.041 [INFO][5793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.048 [WARNING][5793] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.048 [INFO][5793] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.050 [INFO][5793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.054969 containerd[1454]: 2026-03-07 02:14:22.052 [INFO][5785] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.055966 containerd[1454]: time="2026-03-07T02:14:22.054970745Z" level=info msg="TearDown network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" successfully" Mar 7 02:14:22.055966 containerd[1454]: time="2026-03-07T02:14:22.054996212Z" level=info msg="StopPodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" returns successfully" Mar 7 02:14:22.055966 containerd[1454]: time="2026-03-07T02:14:22.055444936Z" level=info msg="RemovePodSandbox for \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\"" Mar 7 02:14:22.055966 containerd[1454]: time="2026-03-07T02:14:22.055481734Z" level=info msg="Forcibly stopping sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\"" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.093 [WARNING][5810] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" WorkloadEndpoint="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.093 [INFO][5810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.093 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" iface="eth0" netns="" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.093 [INFO][5810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.093 [INFO][5810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.119 [INFO][5820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.119 [INFO][5820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.120 [INFO][5820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.125 [WARNING][5820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.125 [INFO][5820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" HandleID="k8s-pod-network.411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Workload="localhost-k8s-whisker--9d799687--rnx8g-eth0" Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.126 [INFO][5820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.132110 containerd[1454]: 2026-03-07 02:14:22.129 [INFO][5810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58" Mar 7 02:14:22.132452 containerd[1454]: time="2026-03-07T02:14:22.132170954Z" level=info msg="TearDown network for sandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" successfully" Mar 7 02:14:22.138177 containerd[1454]: time="2026-03-07T02:14:22.138136990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 02:14:22.138422 containerd[1454]: time="2026-03-07T02:14:22.138208603Z" level=info msg="RemovePodSandbox \"411fd88750910023636b1525d9f426f8477c806cddcde2f70ad10fbb3971cb58\" returns successfully" Mar 7 02:14:22.138830 containerd[1454]: time="2026-03-07T02:14:22.138796060Z" level=info msg="StopPodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\"" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.177 [WARNING][5839] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0", GenerateName:"calico-kube-controllers-67d48759d7-", Namespace:"calico-system", SelfLink:"", UID:"676a8c94-335a-4977-b910-64f7a6bc8f5e", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d48759d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7", Pod:"calico-kube-controllers-67d48759d7-k2tjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81f6c278a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.178 [INFO][5839] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.178 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" iface="eth0" netns="" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.178 [INFO][5839] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.178 [INFO][5839] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.202 [INFO][5848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.202 [INFO][5848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.202 [INFO][5848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.207 [WARNING][5848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.207 [INFO][5848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.209 [INFO][5848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.215037 containerd[1454]: 2026-03-07 02:14:22.212 [INFO][5839] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.215037 containerd[1454]: time="2026-03-07T02:14:22.214957987Z" level=info msg="TearDown network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" successfully" Mar 7 02:14:22.215037 containerd[1454]: time="2026-03-07T02:14:22.214981080Z" level=info msg="StopPodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" returns successfully" Mar 7 02:14:22.215818 containerd[1454]: time="2026-03-07T02:14:22.215747203Z" level=info msg="RemovePodSandbox for \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\"" Mar 7 02:14:22.215818 containerd[1454]: time="2026-03-07T02:14:22.215799240Z" level=info msg="Forcibly stopping sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\"" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.256 [WARNING][5865] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0", GenerateName:"calico-kube-controllers-67d48759d7-", Namespace:"calico-system", SelfLink:"", UID:"676a8c94-335a-4977-b910-64f7a6bc8f5e", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d48759d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f74788c38fc97e9aebfa2f531d0f4abe6d459af5411b08072bc69fab30769f7", Pod:"calico-kube-controllers-67d48759d7-k2tjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81f6c278a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.256 [INFO][5865] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.256 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" iface="eth0" netns="" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.256 [INFO][5865] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.256 [INFO][5865] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.283 [INFO][5873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.283 [INFO][5873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.283 [INFO][5873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.289 [WARNING][5873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.289 [INFO][5873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" HandleID="k8s-pod-network.1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Workload="localhost-k8s-calico--kube--controllers--67d48759d7--k2tjz-eth0" Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.290 [INFO][5873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.296437 containerd[1454]: 2026-03-07 02:14:22.293 [INFO][5865] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a" Mar 7 02:14:22.296833 containerd[1454]: time="2026-03-07T02:14:22.296451199Z" level=info msg="TearDown network for sandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" successfully" Mar 7 02:14:22.302171 containerd[1454]: time="2026-03-07T02:14:22.302135247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 02:14:22.302265 containerd[1454]: time="2026-03-07T02:14:22.302201712Z" level=info msg="RemovePodSandbox \"1e89a704f1af359b1b37cd827bfc07cd0d2d17b3ef8574b209d760c040f15f6a\" returns successfully" Mar 7 02:14:22.302675 containerd[1454]: time="2026-03-07T02:14:22.302640905Z" level=info msg="StopPodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\"" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.340 [WARNING][5891] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"746832be-1a83-49f8-83ca-d151d465a357", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf", Pod:"calico-apiserver-67fc86959f-pwfm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali37d4ce041df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.340 [INFO][5891] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.340 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" iface="eth0" netns="" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.340 [INFO][5891] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.340 [INFO][5891] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.364 [INFO][5899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.365 [INFO][5899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.365 [INFO][5899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.370 [WARNING][5899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.370 [INFO][5899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.371 [INFO][5899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.376952 containerd[1454]: 2026-03-07 02:14:22.374 [INFO][5891] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.377461 containerd[1454]: time="2026-03-07T02:14:22.377021402Z" level=info msg="TearDown network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" successfully" Mar 7 02:14:22.377461 containerd[1454]: time="2026-03-07T02:14:22.377044896Z" level=info msg="StopPodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" returns successfully" Mar 7 02:14:22.377621 containerd[1454]: time="2026-03-07T02:14:22.377543697Z" level=info msg="RemovePodSandbox for \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\"" Mar 7 02:14:22.377621 containerd[1454]: time="2026-03-07T02:14:22.377566750Z" level=info msg="Forcibly stopping sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\"" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.416 [WARNING][5917] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"746832be-1a83-49f8-83ca-d151d465a357", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d2af0a620e9114eda05ae5befe2f3d49daf9d8a2a9f5e487470bebea23ecdbf", Pod:"calico-apiserver-67fc86959f-pwfm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali37d4ce041df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.417 [INFO][5917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.417 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" iface="eth0" netns="" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.417 [INFO][5917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.417 [INFO][5917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.440 [INFO][5925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.440 [INFO][5925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.440 [INFO][5925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.446 [WARNING][5925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.446 [INFO][5925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" HandleID="k8s-pod-network.caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Workload="localhost-k8s-calico--apiserver--67fc86959f--pwfm7-eth0" Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.448 [INFO][5925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.452810 containerd[1454]: 2026-03-07 02:14:22.450 [INFO][5917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e" Mar 7 02:14:22.453166 containerd[1454]: time="2026-03-07T02:14:22.452828836Z" level=info msg="TearDown network for sandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" successfully" Mar 7 02:14:22.458023 containerd[1454]: time="2026-03-07T02:14:22.457963507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 02:14:22.458126 containerd[1454]: time="2026-03-07T02:14:22.458075016Z" level=info msg="RemovePodSandbox \"caac0aa62556c16fda0a39af169fff2f57531e717442a2f642a41b9c46336c5e\" returns successfully" Mar 7 02:14:22.458661 containerd[1454]: time="2026-03-07T02:14:22.458635845Z" level=info msg="StopPodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\"" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.496 [WARNING][5943] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jd58h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"55af8801-3665-4537-a222-72d6ad960f77", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294", Pod:"coredns-7d764666f9-jd58h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac6987d824c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.496 [INFO][5943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.496 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" iface="eth0" netns="" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.496 [INFO][5943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.496 [INFO][5943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.518 [INFO][5951] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.518 [INFO][5951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.518 [INFO][5951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.525 [WARNING][5951] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.525 [INFO][5951] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.527 [INFO][5951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.532418 containerd[1454]: 2026-03-07 02:14:22.529 [INFO][5943] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.532418 containerd[1454]: time="2026-03-07T02:14:22.532339112Z" level=info msg="TearDown network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" successfully" Mar 7 02:14:22.532418 containerd[1454]: time="2026-03-07T02:14:22.532360652Z" level=info msg="StopPodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" returns successfully" Mar 7 02:14:22.532920 containerd[1454]: time="2026-03-07T02:14:22.532898336Z" level=info msg="RemovePodSandbox for \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\"" Mar 7 02:14:22.532950 containerd[1454]: time="2026-03-07T02:14:22.532920968Z" level=info msg="Forcibly stopping sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\"" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.568 [WARNING][5971] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jd58h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"55af8801-3665-4537-a222-72d6ad960f77", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91deee248fb327006af7994e5fa7b482fef1387212aab522d8a30eb695062294", Pod:"coredns-7d764666f9-jd58h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac6987d824c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.569 [INFO][5971] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.569 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" iface="eth0" netns="" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.569 [INFO][5971] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.569 [INFO][5971] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.593 [INFO][5979] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.593 [INFO][5979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.593 [INFO][5979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.599 [WARNING][5979] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.599 [INFO][5979] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" HandleID="k8s-pod-network.454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Workload="localhost-k8s-coredns--7d764666f9--jd58h-eth0" Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.602 [INFO][5979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.608113 containerd[1454]: 2026-03-07 02:14:22.605 [INFO][5971] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e" Mar 7 02:14:22.608710 containerd[1454]: time="2026-03-07T02:14:22.608146733Z" level=info msg="TearDown network for sandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" successfully" Mar 7 02:14:22.612707 containerd[1454]: time="2026-03-07T02:14:22.612670387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 02:14:22.612799 containerd[1454]: time="2026-03-07T02:14:22.612729617Z" level=info msg="RemovePodSandbox \"454f3888271946fa7ae7a03dfa3bdc31505eb629f40fcd565cf231362349cb6e\" returns successfully" Mar 7 02:14:22.616743 containerd[1454]: time="2026-03-07T02:14:22.616695836Z" level=info msg="StopPodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\"" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.653 [WARNING][5996] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1", Pod:"goldmane-9f7667bb8-9fkdf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50309f4a100", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.653 [INFO][5996] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.653 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" iface="eth0" netns="" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.653 [INFO][5996] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.653 [INFO][5996] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.680 [INFO][6004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.680 [INFO][6004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.680 [INFO][6004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.685 [WARNING][6004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.685 [INFO][6004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.687 [INFO][6004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.692441 containerd[1454]: 2026-03-07 02:14:22.689 [INFO][5996] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.693244 containerd[1454]: time="2026-03-07T02:14:22.693182920Z" level=info msg="TearDown network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" successfully" Mar 7 02:14:22.693244 containerd[1454]: time="2026-03-07T02:14:22.693229197Z" level=info msg="StopPodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" returns successfully" Mar 7 02:14:22.693857 containerd[1454]: time="2026-03-07T02:14:22.693726896Z" level=info msg="RemovePodSandbox for \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\"" Mar 7 02:14:22.693857 containerd[1454]: time="2026-03-07T02:14:22.693763745Z" level=info msg="Forcibly stopping sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\"" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.732 [WARNING][6022] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0557d5f8-dfa6-4ac0-b2cb-c8ef999934ab", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c6e33f1961d50269186474d3f0f80768908178c5cdcdddf55ca0d979cad64d1", Pod:"goldmane-9f7667bb8-9fkdf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50309f4a100", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.732 [INFO][6022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.733 [INFO][6022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" iface="eth0" netns="" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.733 [INFO][6022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.733 [INFO][6022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.758 [INFO][6030] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.758 [INFO][6030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.758 [INFO][6030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.763 [WARNING][6030] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.763 [INFO][6030] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" HandleID="k8s-pod-network.1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Workload="localhost-k8s-goldmane--9f7667bb8--9fkdf-eth0" Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.765 [INFO][6030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.771029 containerd[1454]: 2026-03-07 02:14:22.768 [INFO][6022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439" Mar 7 02:14:22.771451 containerd[1454]: time="2026-03-07T02:14:22.771060734Z" level=info msg="TearDown network for sandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" successfully" Mar 7 02:14:22.775689 containerd[1454]: time="2026-03-07T02:14:22.775634984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 02:14:22.775730 containerd[1454]: time="2026-03-07T02:14:22.775699956Z" level=info msg="RemovePodSandbox \"1f33888462932a5df413f41023c71207271357c4ac55b4461a3fa4716acbb439\" returns successfully" Mar 7 02:14:22.776247 containerd[1454]: time="2026-03-07T02:14:22.776189324Z" level=info msg="StopPodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\"" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.813 [WARNING][6048] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"400859bd-9f1f-404b-b164-62fa2410895c", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d", Pod:"calico-apiserver-67fc86959f-bhm6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0dbd50f9ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.813 [INFO][6048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.813 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" iface="eth0" netns="" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.813 [INFO][6048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.813 [INFO][6048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.837 [INFO][6056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.837 [INFO][6056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.837 [INFO][6056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.843 [WARNING][6056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.843 [INFO][6056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.844 [INFO][6056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.849956 containerd[1454]: 2026-03-07 02:14:22.847 [INFO][6048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.849956 containerd[1454]: time="2026-03-07T02:14:22.849904307Z" level=info msg="TearDown network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" successfully" Mar 7 02:14:22.849956 containerd[1454]: time="2026-03-07T02:14:22.849926279Z" level=info msg="StopPodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" returns successfully" Mar 7 02:14:22.850674 containerd[1454]: time="2026-03-07T02:14:22.850331195Z" level=info msg="RemovePodSandbox for \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\"" Mar 7 02:14:22.850674 containerd[1454]: time="2026-03-07T02:14:22.850351662Z" level=info msg="Forcibly stopping sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\"" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.887 [WARNING][6072] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0", GenerateName:"calico-apiserver-67fc86959f-", Namespace:"calico-system", SelfLink:"", UID:"400859bd-9f1f-404b-b164-62fa2410895c", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc86959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"099c5158c2cd8f062c109a43c4f30dd3fc8a29b4cfc412698da4d10d9c2ff56d", Pod:"calico-apiserver-67fc86959f-bhm6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0dbd50f9ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.888 [INFO][6072] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.888 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" iface="eth0" netns="" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.888 [INFO][6072] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.888 [INFO][6072] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.912 [INFO][6081] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.912 [INFO][6081] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.912 [INFO][6081] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.918 [WARNING][6081] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.918 [INFO][6081] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" HandleID="k8s-pod-network.01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Workload="localhost-k8s-calico--apiserver--67fc86959f--bhm6g-eth0" Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.920 [INFO][6081] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:14:22.925460 containerd[1454]: 2026-03-07 02:14:22.922 [INFO][6072] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd" Mar 7 02:14:22.925843 containerd[1454]: time="2026-03-07T02:14:22.925484228Z" level=info msg="TearDown network for sandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" successfully" Mar 7 02:14:22.929882 containerd[1454]: time="2026-03-07T02:14:22.929829218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 02:14:22.929978 containerd[1454]: time="2026-03-07T02:14:22.929892196Z" level=info msg="RemovePodSandbox \"01d3a634417b27ac74f709f05284b5c00f34d8095aeeb2dd5470259e0ac600fd\" returns successfully" Mar 7 02:14:25.844448 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:38626.service - OpenSSH per-connection server daemon (10.0.0.1:38626). Mar 7 02:14:25.891858 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 38626 ssh2: RSA SHA256:PMV8zUN0RSQDHqn4RLDS0yFca0MNBoAsbREJIVkDJ/E Mar 7 02:14:25.893637 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:14:25.898565 systemd-logind[1442]: New session 19 of user core. Mar 7 02:14:25.909695 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 02:14:26.102468 sshd[6131]: pam_unix(sshd:session): session closed for user core Mar 7 02:14:26.106682 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:38626.service: Deactivated successfully. Mar 7 02:14:26.108709 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 02:14:26.109626 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Mar 7 02:14:26.110891 systemd-logind[1442]: Removed session 19.