Mar 4 01:01:31.061235 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026 Mar 4 01:01:31.061547 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:01:31.061564 kernel: BIOS-provided physical RAM map: Mar 4 01:01:31.061573 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 4 01:01:31.061583 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 4 01:01:31.061592 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 4 01:01:31.061603 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 4 01:01:31.061613 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 4 01:01:31.061622 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 4 01:01:31.061631 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 4 01:01:31.061644 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 4 01:01:31.061654 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 4 01:01:31.061664 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 4 01:01:31.061673 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 4 01:01:31.061685 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 4 01:01:31.061695 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 4 01:01:31.061806 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 4 01:01:31.061818 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 4 01:01:31.061828 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 4 01:01:31.061838 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 4 01:01:31.061848 kernel: NX (Execute Disable) protection: active Mar 4 01:01:31.061858 kernel: APIC: Static calls initialized Mar 4 01:01:31.061868 kernel: efi: EFI v2.7 by EDK II Mar 4 01:01:31.061878 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 4 01:01:31.061888 kernel: SMBIOS 2.8 present. Mar 4 01:01:31.061898 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 4 01:01:31.061908 kernel: Hypervisor detected: KVM Mar 4 01:01:31.061922 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 4 01:01:31.061933 kernel: kvm-clock: using sched offset of 15560825063 cycles Mar 4 01:01:31.061943 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 4 01:01:31.061954 kernel: tsc: Detected 2445.424 MHz processor Mar 4 01:01:31.061965 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 4 01:01:31.061975 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 4 01:01:31.061986 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 4 01:01:31.061996 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 4 01:01:31.062007 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 4 01:01:31.062021 kernel: Using GB pages for direct mapping Mar 4 01:01:31.062031 kernel: Secure boot disabled Mar 4 01:01:31.062041 kernel: ACPI: Early table checksum verification disabled Mar 4 01:01:31.062052 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 4 01:01:31.062067 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 4 01:01:31.062338 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:01:31.062352 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:01:31.062367 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 4 01:01:31.062378 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:01:31.062389 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:01:31.062400 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:01:31.062411 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:01:31.062422 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 4 01:01:31.062433 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 4 01:01:31.062447 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 4 01:01:31.062458 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 4 01:01:31.062469 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 4 01:01:31.062480 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 4 01:01:31.062491 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 4 01:01:31.062501 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 4 01:01:31.062512 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 4 01:01:31.062523 kernel: No NUMA configuration found Mar 4 01:01:31.062534 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 4 01:01:31.062548 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 4 01:01:31.062560 kernel: Zone ranges: Mar 4 01:01:31.062571 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 4 01:01:31.062581 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 4 01:01:31.062592 kernel: Normal empty Mar 4 01:01:31.062602 
kernel: Movable zone start for each node Mar 4 01:01:31.062613 kernel: Early memory node ranges Mar 4 01:01:31.062624 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 4 01:01:31.062635 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 4 01:01:31.062646 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 4 01:01:31.062661 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 4 01:01:31.062672 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 4 01:01:31.062682 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 4 01:01:31.062693 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 4 01:01:31.062704 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 01:01:31.062813 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 4 01:01:31.062825 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 4 01:01:31.062835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 01:01:31.062847 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 4 01:01:31.062861 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 4 01:01:31.062872 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 4 01:01:31.062883 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 4 01:01:31.062894 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 4 01:01:31.062905 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 4 01:01:31.062916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 4 01:01:31.062927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 4 01:01:31.062937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 4 01:01:31.062948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 4 01:01:31.062963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 4 01:01:31.062974 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 4 01:01:31.062985 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 4 01:01:31.062996 kernel: TSC deadline timer available Mar 4 01:01:31.063007 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 4 01:01:31.063018 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 4 01:01:31.063029 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 4 01:01:31.063039 kernel: kvm-guest: setup PV sched yield Mar 4 01:01:31.063050 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 4 01:01:31.063065 kernel: Booting paravirtualized kernel on KVM Mar 4 01:01:31.063249 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 4 01:01:31.063262 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 4 01:01:31.063273 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 4 01:01:31.063285 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 4 01:01:31.063295 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 4 01:01:31.063306 kernel: kvm-guest: PV spinlocks enabled Mar 4 01:01:31.063318 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 4 01:01:31.063330 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:01:31.063346 kernel: random: crng init done Mar 4 01:01:31.063357 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 4 01:01:31.063368 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 4 01:01:31.063379 kernel: Fallback order for Node 0: 0 Mar 4 01:01:31.063389 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 4 01:01:31.063401 kernel: Policy zone: DMA32 Mar 4 01:01:31.063411 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 4 01:01:31.063422 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 4 01:01:31.063438 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 4 01:01:31.063448 kernel: ftrace: allocating 37996 entries in 149 pages Mar 4 01:01:31.063459 kernel: ftrace: allocated 149 pages with 4 groups Mar 4 01:01:31.063470 kernel: Dynamic Preempt: voluntary Mar 4 01:01:31.063481 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 4 01:01:31.063505 kernel: rcu: RCU event tracing is enabled. Mar 4 01:01:31.063519 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 4 01:01:31.063530 kernel: Trampoline variant of Tasks RCU enabled. Mar 4 01:01:31.063542 kernel: Rude variant of Tasks RCU enabled. Mar 4 01:01:31.063553 kernel: Tracing variant of Tasks RCU enabled. Mar 4 01:01:31.063563 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 4 01:01:31.063575 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 4 01:01:31.063589 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 4 01:01:31.063601 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 4 01:01:31.063612 kernel: Console: colour dummy device 80x25 Mar 4 01:01:31.063623 kernel: printk: console [ttyS0] enabled Mar 4 01:01:31.063634 kernel: ACPI: Core revision 20230628 Mar 4 01:01:31.063647 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 4 01:01:31.063658 kernel: APIC: Switch to symmetric I/O mode setup Mar 4 01:01:31.063669 kernel: x2apic enabled Mar 4 01:01:31.063680 kernel: APIC: Switched APIC routing to: physical x2apic Mar 4 01:01:31.063691 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 4 01:01:31.063702 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 4 01:01:31.063807 kernel: kvm-guest: setup PV IPIs Mar 4 01:01:31.063818 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 4 01:01:31.063828 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 4 01:01:31.063842 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Mar 4 01:01:31.063853 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 4 01:01:31.063864 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 4 01:01:31.063876 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 4 01:01:31.063887 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 4 01:01:31.063898 kernel: Spectre V2 : Mitigation: Retpolines Mar 4 01:01:31.063909 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 4 01:01:31.063920 kernel: Speculative Store Bypass: Vulnerable Mar 4 01:01:31.063932 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 4 01:01:31.063948 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 4 01:01:31.063959 kernel: active return thunk: srso_alias_return_thunk Mar 4 01:01:31.063970 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 4 01:01:31.063982 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 4 01:01:31.063993 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 4 01:01:31.064005 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 4 01:01:31.064016 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 4 01:01:31.064028 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 4 01:01:31.064042 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 4 01:01:31.064053 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 4 01:01:31.064065 kernel: Freeing SMP alternatives memory: 32K Mar 4 01:01:31.064223 kernel: pid_max: default: 32768 minimum: 301 Mar 4 01:01:31.064308 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 4 01:01:31.064321 kernel: landlock: Up and running. Mar 4 01:01:31.064333 kernel: SELinux: Initializing. Mar 4 01:01:31.064344 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 01:01:31.064356 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 01:01:31.064371 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 4 01:01:31.064383 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:01:31.064395 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:01:31.064406 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:01:31.064418 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 4 01:01:31.064429 kernel: signal: max sigframe size: 1776 Mar 4 01:01:31.064441 kernel: rcu: Hierarchical SRCU implementation. Mar 4 01:01:31.064453 kernel: rcu: Max phase no-delay instances is 400. Mar 4 01:01:31.064465 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 4 01:01:31.064480 kernel: smp: Bringing up secondary CPUs ... Mar 4 01:01:31.064491 kernel: smpboot: x86: Booting SMP configuration: Mar 4 01:01:31.064580 kernel: .... node #0, CPUs: #1 #2 #3 Mar 4 01:01:31.064593 kernel: smp: Brought up 1 node, 4 CPUs Mar 4 01:01:31.064604 kernel: smpboot: Max logical packages: 1 Mar 4 01:01:31.064615 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 4 01:01:31.064627 kernel: devtmpfs: initialized Mar 4 01:01:31.064638 kernel: x86/mm: Memory block size: 128MB Mar 4 01:01:31.064650 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 4 01:01:31.064665 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 4 01:01:31.064677 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 4 01:01:31.064688 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 4 01:01:31.064699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 4 01:01:31.064801 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 4 01:01:31.064813 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 4 01:01:31.064825 kernel: pinctrl core: initialized pinctrl subsystem Mar 4 01:01:31.064836 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 4 01:01:31.064848 kernel: audit: initializing netlink subsys (disabled) Mar 4 01:01:31.064863 kernel: audit: type=2000 audit(1772586080.579:1): state=initialized audit_enabled=0 res=1 Mar 4 01:01:31.064874 kernel: thermal_sys: Registered thermal governor 
'step_wise' Mar 4 01:01:31.064885 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 4 01:01:31.064896 kernel: cpuidle: using governor menu Mar 4 01:01:31.064907 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 4 01:01:31.064919 kernel: dca service started, version 1.12.1 Mar 4 01:01:31.064930 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 4 01:01:31.064942 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 4 01:01:31.064954 kernel: PCI: Using configuration type 1 for base access Mar 4 01:01:31.064968 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 4 01:01:31.064979 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 4 01:01:31.064991 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 4 01:01:31.065002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 4 01:01:31.065013 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 4 01:01:31.065025 kernel: ACPI: Added _OSI(Module Device) Mar 4 01:01:31.065036 kernel: ACPI: Added _OSI(Processor Device) Mar 4 01:01:31.065046 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 4 01:01:31.065057 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 4 01:01:31.065215 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 4 01:01:31.065228 kernel: ACPI: Interpreter enabled Mar 4 01:01:31.065239 kernel: ACPI: PM: (supports S0 S3 S5) Mar 4 01:01:31.065250 kernel: ACPI: Using IOAPIC for interrupt routing Mar 4 01:01:31.065261 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 4 01:01:31.065272 kernel: PCI: Using E820 reservations for host bridge windows Mar 4 01:01:31.065283 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 4 01:01:31.065294 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 
4 01:01:31.066316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 4 01:01:31.066592 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 4 01:01:31.066864 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 4 01:01:31.066884 kernel: PCI host bridge to bus 0000:00 Mar 4 01:01:31.067485 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 4 01:01:31.067650 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 4 01:01:31.067892 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 4 01:01:31.068053 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 4 01:01:31.068440 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 4 01:01:31.068831 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 4 01:01:31.069540 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 4 01:01:31.069970 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 4 01:01:31.070520 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 4 01:01:31.070705 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 4 01:01:31.071215 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 4 01:01:31.071501 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 4 01:01:31.071684 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 4 01:01:31.072058 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 4 01:01:31.072642 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 4 01:01:31.072986 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 4 01:01:31.073318 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 4 01:01:31.073497 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 4 
01:01:31.074021 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 4 01:01:31.074365 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 4 01:01:31.074547 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 4 01:01:31.074824 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 4 01:01:31.075303 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 4 01:01:31.075493 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 4 01:01:31.075673 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 4 01:01:31.075949 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 4 01:01:31.076287 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 4 01:01:31.077228 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 4 01:01:31.077415 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 4 01:01:31.077689 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 4 01:01:31.077973 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 4 01:01:31.078361 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 4 01:01:31.078857 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 4 01:01:31.079039 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 4 01:01:31.079057 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 4 01:01:31.079221 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 4 01:01:31.079237 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 4 01:01:31.079255 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 4 01:01:31.079267 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 4 01:01:31.079277 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 4 01:01:31.079287 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 4 01:01:31.079296 kernel: ACPI: PCI: Interrupt 
link LNKH configured for IRQ 11 Mar 4 01:01:31.079305 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 4 01:01:31.079315 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 4 01:01:31.079328 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 4 01:01:31.079339 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 4 01:01:31.079356 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 4 01:01:31.079365 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 4 01:01:31.079375 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 4 01:01:31.079384 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 4 01:01:31.079393 kernel: iommu: Default domain type: Translated Mar 4 01:01:31.079406 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 4 01:01:31.079418 kernel: efivars: Registered efivars operations Mar 4 01:01:31.079430 kernel: PCI: Using ACPI for IRQ routing Mar 4 01:01:31.079440 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 4 01:01:31.079455 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 4 01:01:31.079465 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 4 01:01:31.079474 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 4 01:01:31.079487 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 4 01:01:31.079670 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 4 01:01:31.079955 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 4 01:01:31.080284 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 4 01:01:31.080302 kernel: vgaarb: loaded Mar 4 01:01:31.080314 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 4 01:01:31.080333 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 4 01:01:31.080342 kernel: clocksource: Switched to clocksource kvm-clock Mar 4 01:01:31.080352 kernel: VFS: Disk quotas dquot_6.6.0 Mar 4 
01:01:31.080362 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 4 01:01:31.080372 kernel: pnp: PnP ACPI init Mar 4 01:01:31.080987 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 4 01:01:31.081006 kernel: pnp: PnP ACPI: found 6 devices Mar 4 01:01:31.081016 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 4 01:01:31.081034 kernel: NET: Registered PF_INET protocol family Mar 4 01:01:31.081045 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 4 01:01:31.081055 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 4 01:01:31.081064 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 4 01:01:31.081226 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 4 01:01:31.081241 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 4 01:01:31.081251 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 4 01:01:31.081261 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 01:01:31.081270 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 01:01:31.081285 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 4 01:01:31.081297 kernel: NET: Registered PF_XDP protocol family Mar 4 01:01:31.081473 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 4 01:01:31.081804 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 4 01:01:31.081971 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 4 01:01:31.082339 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 4 01:01:31.082498 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 4 01:01:31.082662 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff 
window] Mar 4 01:01:31.082908 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 4 01:01:31.083065 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 4 01:01:31.083233 kernel: PCI: CLS 0 bytes, default 64 Mar 4 01:01:31.083322 kernel: Initialise system trusted keyrings Mar 4 01:01:31.083335 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 4 01:01:31.083345 kernel: Key type asymmetric registered Mar 4 01:01:31.083356 kernel: Asymmetric key parser 'x509' registered Mar 4 01:01:31.083366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 01:01:31.083382 kernel: io scheduler mq-deadline registered Mar 4 01:01:31.083393 kernel: io scheduler kyber registered Mar 4 01:01:31.083404 kernel: io scheduler bfq registered Mar 4 01:01:31.083415 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 01:01:31.083427 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 4 01:01:31.083438 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 4 01:01:31.083449 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 01:01:31.083460 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 01:01:31.083471 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 01:01:31.083485 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 01:01:31.083498 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 01:01:31.083508 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 01:01:31.083985 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 4 01:01:31.084004 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 4 01:01:31.084314 kernel: rtc_cmos 00:04: registered as rtc0 Mar 4 01:01:31.084480 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T01:01:29 UTC (1772586089) Mar 4 01:01:31.084641 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 4 
01:01:31.084661 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 4 01:01:31.084672 kernel: efifb: probing for efifb Mar 4 01:01:31.084684 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 4 01:01:31.084697 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 4 01:01:31.084798 kernel: efifb: scrolling: redraw Mar 4 01:01:31.084810 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 4 01:01:31.084820 kernel: Console: switching to colour frame buffer device 100x37 Mar 4 01:01:31.084829 kernel: fb0: EFI VGA frame buffer device Mar 4 01:01:31.084842 kernel: pstore: Using crash dump compression: deflate Mar 4 01:01:31.084858 kernel: pstore: Registered efi_pstore as persistent store backend Mar 4 01:01:31.084868 kernel: NET: Registered PF_INET6 protocol family Mar 4 01:01:31.084877 kernel: Segment Routing with IPv6 Mar 4 01:01:31.084886 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 01:01:31.084899 kernel: NET: Registered PF_PACKET protocol family Mar 4 01:01:31.084909 kernel: Key type dns_resolver registered Mar 4 01:01:31.084918 kernel: IPI shorthand broadcast: enabled Mar 4 01:01:31.084952 kernel: sched_clock: Marking stable (9596054884, 534180636)->(10749569896, -619334376) Mar 4 01:01:31.084966 kernel: registered taskstats version 1 Mar 4 01:01:31.084979 kernel: Loading compiled-in X.509 certificates Mar 4 01:01:31.084990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 01:01:31.085003 kernel: Key type .fscrypt registered Mar 4 01:01:31.085012 kernel: Key type fscrypt-provisioning registered Mar 4 01:01:31.085022 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 4 01:01:31.085032 kernel: ima: Allocated hash algorithm: sha1 Mar 4 01:01:31.085045 kernel: ima: No architecture policies found Mar 4 01:01:31.085055 kernel: clk: Disabling unused clocks Mar 4 01:01:31.085065 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 01:01:31.085221 kernel: Write protecting the kernel read-only data: 36864k Mar 4 01:01:31.085232 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 01:01:31.085241 kernel: Run /init as init process Mar 4 01:01:31.085251 kernel: with arguments: Mar 4 01:01:31.085261 kernel: /init Mar 4 01:01:31.085272 kernel: with environment: Mar 4 01:01:31.085283 kernel: HOME=/ Mar 4 01:01:31.085294 kernel: TERM=linux Mar 4 01:01:31.085308 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:01:31.085328 systemd[1]: Detected virtualization kvm. Mar 4 01:01:31.085340 systemd[1]: Detected architecture x86-64. Mar 4 01:01:31.085352 systemd[1]: Running in initrd. Mar 4 01:01:31.085364 systemd[1]: No hostname configured, using default hostname. Mar 4 01:01:31.085374 systemd[1]: Hostname set to . Mar 4 01:01:31.085385 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:01:31.085395 systemd[1]: Queued start job for default target initrd.target. Mar 4 01:01:31.085413 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:01:31.085423 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:01:31.085435 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 4 01:01:31.085445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:01:31.085458 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 01:01:31.085476 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 01:01:31.085489 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 01:01:31.085499 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 01:01:31.085513 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:01:31.085523 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:01:31.085534 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:01:31.085547 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:01:31.085560 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:01:31.085572 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:01:31.085583 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:01:31.085595 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:01:31.085609 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 01:01:31.085620 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 01:01:31.085631 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:01:31.085641 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:01:31.085657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:01:31.085669 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:01:31.085682 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 01:01:31.085695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:01:31.085792 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 01:01:31.085804 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 01:01:31.085815 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:01:31.085826 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:01:31.085868 systemd-journald[194]: Collecting audit messages is disabled.
Mar 4 01:01:31.085899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:01:31.085912 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 01:01:31.085924 systemd-journald[194]: Journal started
Mar 4 01:01:31.085953 systemd-journald[194]: Runtime Journal (/run/log/journal/c7c7dc65a00249d2a2635a8055774b41) is 6.0M, max 48.3M, 42.2M free.
Mar 4 01:01:31.110892 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:01:31.118583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:01:31.118604 systemd-modules-load[195]: Inserted module 'overlay'
Mar 4 01:01:31.127296 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 01:01:31.144947 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:01:31.191348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:01:31.224934 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:01:31.260670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:01:31.274324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:01:31.345535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:01:31.361264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 01:01:31.374531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:01:31.387313 kernel: Bridge firewalling registered
Mar 4 01:01:31.378517 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 4 01:01:31.397814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:01:31.440290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:01:31.451393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:01:31.502470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:01:31.521497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:01:31.544316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:01:31.571643 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 01:01:31.618631 systemd-resolved[230]: Positive Trust Anchors:
Mar 4 01:01:31.618994 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:01:31.645274 dracut-cmdline[233]: dracut-dracut-053
Mar 4 01:01:31.645274 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:01:31.619045 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:01:31.627616 systemd-resolved[230]: Defaulting to hostname 'linux'.
Mar 4 01:01:31.632347 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:01:31.655819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:01:31.827351 kernel: SCSI subsystem initialized
Mar 4 01:01:31.856351 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 01:01:31.888279 kernel: iscsi: registered transport (tcp)
Mar 4 01:01:31.936503 kernel: iscsi: registered transport (qla4xxx)
Mar 4 01:01:31.936664 kernel: QLogic iSCSI HBA Driver
Mar 4 01:01:32.038369 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:01:32.064487 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 01:01:32.134267 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 01:01:32.134353 kernel: device-mapper: uevent: version 1.0.3
Mar 4 01:01:32.143394 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 01:01:32.223464 kernel: raid6: avx2x4 gen() 20047 MB/s
Mar 4 01:01:32.245381 kernel: raid6: avx2x2 gen() 19505 MB/s
Mar 4 01:01:32.273587 kernel: raid6: avx2x1 gen() 9997 MB/s
Mar 4 01:01:32.273677 kernel: raid6: using algorithm avx2x4 gen() 20047 MB/s
Mar 4 01:01:32.300598 kernel: raid6: .... xor() 4534 MB/s, rmw enabled
Mar 4 01:01:32.300688 kernel: raid6: using avx2x2 recovery algorithm
Mar 4 01:01:32.345416 kernel: xor: automatically using best checksumming function avx
Mar 4 01:01:32.789411 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 01:01:32.819242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:01:32.849544 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:01:32.875063 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 4 01:01:32.887029 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:01:32.899325 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 01:01:32.944537 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 4 01:01:33.029364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:01:33.066703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:01:33.224582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:01:33.250886 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 01:01:33.296484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:01:33.308001 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:01:33.321680 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:01:33.337018 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:01:33.383645 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 01:01:33.398529 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:01:33.398686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:01:33.427857 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:01:33.447874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:01:33.448270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:01:33.493818 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:01:33.501561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:01:33.552664 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:01:33.589800 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 4 01:01:33.590022 kernel: cryptd: max_cpu_qlen set to 1000
Mar 4 01:01:33.605887 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 4 01:01:33.626890 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 4 01:01:33.626947 kernel: GPT:9289727 != 19775487
Mar 4 01:01:33.634461 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 4 01:01:33.634496 kernel: GPT:9289727 != 19775487
Mar 4 01:01:33.644913 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 4 01:01:33.644958 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:01:33.653902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:01:33.655533 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:01:33.723467 kernel: libata version 3.00 loaded.
Mar 4 01:01:33.714989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:01:33.774865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:01:33.803435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:01:33.827590 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (461)
Mar 4 01:01:33.850260 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 4 01:01:33.850317 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (483)
Mar 4 01:01:33.863231 kernel: AES CTR mode by8 optimization enabled
Mar 4 01:01:33.889511 kernel: ahci 0000:00:1f.2: version 3.0
Mar 4 01:01:33.889989 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 4 01:01:33.874507 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 4 01:01:33.894320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 4 01:01:33.922883 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 4 01:01:33.923389 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 4 01:01:33.925793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:01:33.942557 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 4 01:01:33.964006 kernel: scsi host0: ahci
Mar 4 01:01:33.964516 kernel: scsi host1: ahci
Mar 4 01:01:33.964834 kernel: scsi host2: ahci
Mar 4 01:01:33.965063 kernel: scsi host3: ahci
Mar 4 01:01:33.977291 kernel: scsi host4: ahci
Mar 4 01:01:33.977501 kernel: scsi host5: ahci
Mar 4 01:01:33.992039 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 4 01:01:34.000658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 4 01:01:34.052018 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 4 01:01:34.052047 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 4 01:01:34.052061 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 4 01:01:34.052241 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 4 01:01:34.052261 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 4 01:01:34.087960 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 01:01:34.103557 disk-uuid[572]: Primary Header is updated.
Mar 4 01:01:34.103557 disk-uuid[572]: Secondary Entries is updated.
Mar 4 01:01:34.103557 disk-uuid[572]: Secondary Header is updated.
Mar 4 01:01:34.145848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:01:34.141395 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:01:34.355353 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 4 01:01:34.367182 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 4 01:01:34.367252 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 4 01:01:34.383410 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 4 01:01:34.390383 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 4 01:01:34.401486 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 4 01:01:34.401577 kernel: ata3.00: applying bridge limits
Mar 4 01:01:34.410350 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 4 01:01:34.416401 kernel: ata3.00: configured for UDMA/100
Mar 4 01:01:34.428524 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 4 01:01:34.499883 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 4 01:01:34.500665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 01:01:34.522598 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 4 01:01:35.146207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:01:35.154041 disk-uuid[573]: The operation has completed successfully.
Mar 4 01:01:35.227574 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 01:01:35.228065 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 01:01:35.269961 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 01:01:35.293251 sh[600]: Success
Mar 4 01:01:35.328435 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 4 01:01:35.428601 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 01:01:35.454362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 01:01:35.477929 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 01:01:35.540579 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605
Mar 4 01:01:35.540621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:01:35.540639 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 01:01:35.540657 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 01:01:35.546537 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 01:01:35.579956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 01:01:35.595435 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 01:01:35.620495 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 01:01:35.646344 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 01:01:35.701877 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:01:35.701919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:01:35.701939 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:01:35.728213 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:01:35.756024 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 01:01:35.772614 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:01:35.793482 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 01:01:35.818815 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 01:01:35.982034 ignition[700]: Ignition 2.19.0
Mar 4 01:01:35.982278 ignition[700]: Stage: fetch-offline
Mar 4 01:01:35.982339 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:01:35.982355 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:01:35.982487 ignition[700]: parsed url from cmdline: ""
Mar 4 01:01:35.982493 ignition[700]: no config URL provided
Mar 4 01:01:35.982501 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 01:01:35.982513 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Mar 4 01:01:35.982547 ignition[700]: op(1): [started] loading QEMU firmware config module
Mar 4 01:01:35.982554 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 4 01:01:36.047339 ignition[700]: op(1): [finished] loading QEMU firmware config module
Mar 4 01:01:36.067461 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:01:36.105558 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:01:36.159000 systemd-networkd[788]: lo: Link UP
Mar 4 01:01:36.159066 systemd-networkd[788]: lo: Gained carrier
Mar 4 01:01:36.165318 systemd-networkd[788]: Enumeration completed
Mar 4 01:01:36.167298 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:01:36.176983 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:01:36.176989 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:01:36.180672 systemd-networkd[788]: eth0: Link UP
Mar 4 01:01:36.180677 systemd-networkd[788]: eth0: Gained carrier
Mar 4 01:01:36.180688 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:01:36.186354 systemd[1]: Reached target network.target - Network.
Mar 4 01:01:36.288279 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:01:36.977381 ignition[700]: parsing config with SHA512: 43636861224cb5d47792da8b579e7ec37760b5008e08d20e0f0dee8f204d991bf622bc5bf3eef99d08d3e1d547435e49987f8617b18c79f8896d24dceface338
Mar 4 01:01:36.994068 unknown[700]: fetched base config from "system"
Mar 4 01:01:36.994309 unknown[700]: fetched user config from "qemu"
Mar 4 01:01:36.994864 ignition[700]: fetch-offline: fetch-offline passed
Mar 4 01:01:36.994949 ignition[700]: Ignition finished successfully
Mar 4 01:01:37.031518 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:01:37.044603 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 4 01:01:37.075923 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 01:01:37.123034 ignition[792]: Ignition 2.19.0
Mar 4 01:01:37.123323 ignition[792]: Stage: kargs
Mar 4 01:01:37.123507 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:01:37.123527 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:01:37.152307 ignition[792]: kargs: kargs passed
Mar 4 01:01:37.152448 ignition[792]: Ignition finished successfully
Mar 4 01:01:37.169566 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 01:01:37.204863 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 01:01:37.271368 ignition[800]: Ignition 2.19.0
Mar 4 01:01:37.271447 ignition[800]: Stage: disks
Mar 4 01:01:37.271616 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:01:37.271629 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:01:37.273394 ignition[800]: disks: disks passed
Mar 4 01:01:37.311040 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 01:01:37.273458 ignition[800]: Ignition finished successfully
Mar 4 01:01:37.325968 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 01:01:37.347541 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:01:37.360391 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:01:37.391308 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:01:37.407348 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:01:37.438671 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 01:01:37.487022 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 4 01:01:37.504837 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 01:01:37.519454 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 01:01:37.628715 systemd-networkd[788]: eth0: Gained IPv6LL
Mar 4 01:01:38.081354 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 01:01:38.083850 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 01:01:38.110483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:01:38.136955 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:01:38.149379 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 01:01:38.210982 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Mar 4 01:01:38.211027 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:01:38.211058 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:01:38.211224 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:01:38.211029 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 01:01:38.254247 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:01:38.211251 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 01:01:38.211295 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:01:38.256353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:01:38.270246 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 01:01:38.316391 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 01:01:38.487021 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 01:01:38.517344 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 4 01:01:38.545703 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 01:01:38.560371 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 01:01:39.018927 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 01:01:39.050673 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 01:01:39.065482 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 01:01:39.109829 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:01:39.113828 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 01:01:39.174966 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 01:01:39.236051 ignition[931]: INFO : Ignition 2.19.0
Mar 4 01:01:39.236051 ignition[931]: INFO : Stage: mount
Mar 4 01:01:39.250020 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:01:39.250020 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:01:39.271999 ignition[931]: INFO : mount: mount passed
Mar 4 01:01:39.279555 ignition[931]: INFO : Ignition finished successfully
Mar 4 01:01:39.293313 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 01:01:39.326607 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 01:01:39.378062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:01:39.423840 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Mar 4 01:01:39.447068 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:01:39.447296 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:01:39.447318 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:01:39.504018 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:01:39.513019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:01:39.622869 ignition[962]: INFO : Ignition 2.19.0
Mar 4 01:01:39.622869 ignition[962]: INFO : Stage: files
Mar 4 01:01:39.659022 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:01:39.659022 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:01:39.659022 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 01:01:39.708393 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 01:01:39.708393 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 01:01:39.708393 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 01:01:39.754936 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 01:01:39.754936 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 01:01:39.716695 unknown[962]: wrote ssh authorized keys file for user: core
Mar 4 01:01:39.800252 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:01:39.800252 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 01:01:39.948418 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 01:01:40.096584 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:01:40.096584 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:01:40.137010 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 4 01:01:40.482897 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 4 01:01:41.271578 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:01:41.271578 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 4 01:01:41.305995 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:01:41.439520 ignition[962]: INFO : files: files passed
Mar 4 01:01:41.439520 ignition[962]: INFO : Ignition finished successfully
Mar 4 01:01:41.384469 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 01:01:41.439866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 01:01:41.479928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 01:01:41.503373 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 01:01:41.669996 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 4 01:01:41.503524 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 01:01:41.738671 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:01:41.738671 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:01:41.527265 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:01:41.807349 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:01:41.550700 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 01:01:41.611986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 01:01:41.683343 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 4 01:01:41.683531 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 4 01:01:41.696586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 4 01:01:41.706477 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 4 01:01:41.716646 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 4 01:01:41.718874 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 4 01:01:41.760851 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:01:41.783385 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 4 01:01:41.821309 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:01:41.834287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:01:41.846323 systemd[1]: Stopped target timers.target - Timer Units. Mar 4 01:01:41.855400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 4 01:01:41.855650 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:01:41.878316 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 4 01:01:41.900819 systemd[1]: Stopped target basic.target - Basic System. Mar 4 01:01:41.909610 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 4 01:01:41.930870 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 01:01:41.942514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 4 01:01:41.953610 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 4 01:01:41.967472 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 01:01:41.988502 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 4 01:01:42.009354 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 4 01:01:42.031486 systemd[1]: Stopped target swap.target - Swaps. Mar 4 01:01:42.056267 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 4 01:01:42.056551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 4 01:01:42.080717 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:01:42.096603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:01:42.117423 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 4 01:01:42.117633 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:01:42.137595 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 4 01:01:42.530962 ignition[1013]: INFO : Ignition 2.19.0 Mar 4 01:01:42.530962 ignition[1013]: INFO : Stage: umount Mar 4 01:01:42.530962 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:01:42.530962 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:01:42.137897 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 4 01:01:42.621962 ignition[1013]: INFO : umount: umount passed Mar 4 01:01:42.621962 ignition[1013]: INFO : Ignition finished successfully Mar 4 01:01:42.161707 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 4 01:01:42.162068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 01:01:42.176506 systemd[1]: Stopped target paths.target - Path Units. Mar 4 01:01:42.195515 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 4 01:01:42.199385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 4 01:01:42.229642 systemd[1]: Stopped target slices.target - Slice Units. Mar 4 01:01:42.251423 systemd[1]: Stopped target sockets.target - Socket Units. Mar 4 01:01:42.273489 systemd[1]: iscsid.socket: Deactivated successfully. Mar 4 01:01:42.273846 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 01:01:42.296642 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 4 01:01:42.296954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 01:01:42.313567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 4 01:01:42.313841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 01:01:42.338986 systemd[1]: ignition-files.service: Deactivated successfully. Mar 4 01:01:42.339313 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 4 01:01:42.384541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 4 01:01:42.425493 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 4 01:01:42.429473 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 4 01:01:42.429881 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:01:42.469563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 4 01:01:42.482962 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 01:01:42.545908 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 4 01:01:42.549712 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 4 01:01:42.564055 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 4 01:01:42.565496 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 4 01:01:42.646289 systemd[1]: Stopped target network.target - Network. Mar 4 01:01:42.665247 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 4 01:01:42.669468 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 4 01:01:42.707269 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 4 01:01:42.708062 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 4 01:01:42.735527 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 4 01:01:42.742011 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 4 01:01:42.803681 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 4 01:01:42.805407 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 4 01:01:42.828277 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 4 01:01:42.863482 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 4 01:01:42.933474 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 4 01:01:42.933692 systemd-networkd[788]: eth0: DHCPv6 lease lost Mar 4 01:01:42.934331 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 4 01:01:42.997054 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 4 01:01:42.997925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:01:43.046272 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 4 01:01:43.050958 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 4 01:01:43.090454 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 4 01:01:43.090568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:01:43.313460 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 4 01:01:43.362003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 4 01:01:43.362361 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:01:43.365663 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 4 01:01:43.365971 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:01:43.402049 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 4 01:01:43.403448 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 4 01:01:43.427222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:01:43.469448 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 4 01:01:43.486574 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 4 01:01:43.487326 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 4 01:01:43.518945 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 4 01:01:43.520387 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 4 01:01:43.584403 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 4 01:01:43.584864 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:01:43.600584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 4 01:01:43.600868 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 4 01:01:43.621020 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 4 01:01:43.621948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:01:43.655976 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 4 01:01:43.656342 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 4 01:01:43.694373 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 4 01:01:43.694529 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 4 01:01:43.717990 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 01:01:43.718368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 4 01:01:43.769533 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 4 01:01:43.786722 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 4 01:01:43.787006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:01:43.818382 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 4 01:01:43.819400 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:01:43.844275 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 4 01:01:43.844420 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:01:43.863326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:01:43.863473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:01:43.932438 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 4 01:01:43.932822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 4 01:01:44.258986 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 4 01:01:44.259553 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 4 01:01:44.268915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 4 01:01:44.340434 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 4 01:01:44.399068 systemd[1]: Switching root. Mar 4 01:01:44.515638 systemd-journald[194]: Journal stopped Mar 4 01:01:49.237399 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 4 01:01:49.237506 kernel: SELinux: policy capability network_peer_controls=1 Mar 4 01:01:49.237702 kernel: SELinux: policy capability open_perms=1 Mar 4 01:01:49.237722 kernel: SELinux: policy capability extended_socket_class=1 Mar 4 01:01:49.237739 kernel: SELinux: policy capability always_check_network=0 Mar 4 01:01:49.237851 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 4 01:01:49.237954 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 4 01:01:49.237974 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 4 01:01:49.238230 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 4 01:01:49.238251 kernel: audit: type=1403 audit(1772586105.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 4 01:01:49.238354 systemd[1]: Successfully loaded SELinux policy in 207.262ms. Mar 4 01:01:49.238397 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.914ms. Mar 4 01:01:49.238417 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:01:49.238437 systemd[1]: Detected virtualization kvm. Mar 4 01:01:49.238455 systemd[1]: Detected architecture x86-64. Mar 4 01:01:49.238474 systemd[1]: Detected first boot. Mar 4 01:01:49.238572 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:01:49.238591 zram_generator::config[1061]: No configuration found. Mar 4 01:01:49.238691 systemd[1]: Populated /etc with preset unit settings. Mar 4 01:01:49.238710 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 4 01:01:49.238729 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 4 01:01:49.238836 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Mar 4 01:01:49.238860 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 4 01:01:49.238972 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 4 01:01:49.238992 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 4 01:01:49.239013 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 4 01:01:49.239030 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 4 01:01:49.239056 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 4 01:01:49.239238 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 4 01:01:49.239264 systemd[1]: Created slice user.slice - User and Session Slice. Mar 4 01:01:49.239280 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:01:49.239300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:01:49.239318 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 4 01:01:49.239347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 4 01:01:49.239365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 4 01:01:49.239389 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 01:01:49.239409 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 4 01:01:49.239427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:01:49.239446 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 4 01:01:49.239463 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Mar 4 01:01:49.239482 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 4 01:01:49.239500 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 4 01:01:49.239519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:01:49.239541 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 01:01:49.239561 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:01:49.239663 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:01:49.239682 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 4 01:01:49.239702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 4 01:01:49.239719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:01:49.239739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:01:49.239855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:01:49.239875 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 4 01:01:49.239899 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 4 01:01:49.239920 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 4 01:01:49.239937 systemd[1]: Mounting media.mount - External Media Directory... Mar 4 01:01:49.239956 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:01:49.239973 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 4 01:01:49.239993 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 4 01:01:49.240010 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 4 01:01:49.240031 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 4 01:01:49.240049 systemd[1]: Reached target machines.target - Containers. Mar 4 01:01:49.240309 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 4 01:01:49.240333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:01:49.240426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:01:49.240445 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 4 01:01:49.240470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:01:49.240574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 01:01:49.240592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:01:49.240609 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 4 01:01:49.240631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 01:01:49.240649 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 4 01:01:49.240673 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 4 01:01:49.240690 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 4 01:01:49.240708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 4 01:01:49.240726 systemd[1]: Stopped systemd-fsck-usr.service. Mar 4 01:01:49.240832 kernel: fuse: init (API version 7.39) Mar 4 01:01:49.240852 kernel: ACPI: bus type drm_connector registered Mar 4 01:01:49.240869 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 4 01:01:49.240891 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:01:49.240909 kernel: loop: module loaded Mar 4 01:01:49.240926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 4 01:01:49.240983 systemd-journald[1145]: Collecting audit messages is disabled. Mar 4 01:01:49.241015 systemd-journald[1145]: Journal started Mar 4 01:01:49.241050 systemd-journald[1145]: Runtime Journal (/run/log/journal/c7c7dc65a00249d2a2635a8055774b41) is 6.0M, max 48.3M, 42.2M free. Mar 4 01:01:47.316400 systemd[1]: Queued start job for default target multi-user.target. Mar 4 01:01:47.392653 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 4 01:01:47.397429 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 4 01:01:47.400299 systemd[1]: systemd-journald.service: Consumed 5.018s CPU time. Mar 4 01:01:49.256303 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 4 01:01:49.300728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 01:01:49.334966 systemd[1]: verity-setup.service: Deactivated successfully. Mar 4 01:01:49.335051 systemd[1]: Stopped verity-setup.service. Mar 4 01:01:49.363515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:01:49.379984 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:01:49.389533 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 4 01:01:49.400941 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 4 01:01:49.413057 systemd[1]: Mounted media.mount - External Media Directory. Mar 4 01:01:49.423934 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 4 01:01:49.436044 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 4 01:01:49.449420 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 4 01:01:49.461481 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 4 01:01:49.477549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:01:49.491018 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 4 01:01:49.491693 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 4 01:01:49.503737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 01:01:49.505252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 01:01:49.516508 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 01:01:49.518323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 01:01:49.531363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 01:01:49.532558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:01:49.546261 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 4 01:01:49.546687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 4 01:01:49.557057 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:01:49.557551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:01:49.569401 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 01:01:49.584412 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 4 01:01:49.598973 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 4 01:01:49.637549 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 4 01:01:49.658871 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 4 01:01:49.673647 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 4 01:01:49.685679 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 4 01:01:49.685955 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 01:01:49.697717 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 4 01:01:49.712874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 4 01:01:49.728064 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 4 01:01:49.739455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 01:01:49.753601 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 4 01:01:49.775381 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 4 01:01:49.786963 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 01:01:49.792581 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 4 01:01:49.804417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 01:01:49.807861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:01:49.840535 systemd-journald[1145]: Time spent on flushing to /var/log/journal/c7c7dc65a00249d2a2635a8055774b41 is 310.057ms for 983 entries. Mar 4 01:01:49.840535 systemd-journald[1145]: System Journal (/var/log/journal/c7c7dc65a00249d2a2635a8055774b41) is 8.0M, max 195.6M, 187.6M free. Mar 4 01:01:50.368446 systemd-journald[1145]: Received client request to flush runtime journal. 
Mar 4 01:01:50.368837 kernel: loop0: detected capacity change from 0 to 140768 Mar 4 01:01:50.368987 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 4 01:01:49.822892 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 4 01:01:49.857442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 01:01:49.877431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 4 01:01:49.893612 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 4 01:01:49.905739 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 4 01:01:49.948854 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 4 01:01:49.968569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:01:49.985839 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 4 01:01:50.020432 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 4 01:01:50.049470 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 4 01:01:50.306996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 4 01:01:50.331732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:01:50.345399 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 4 01:01:50.382422 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 4 01:01:50.413696 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 4 01:01:50.435376 kernel: loop1: detected capacity change from 0 to 217752 Mar 4 01:01:50.455441 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. 
Mar 4 01:01:50.455533 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Mar 4 01:01:50.502933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:01:50.533244 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 4 01:01:50.601970 kernel: loop2: detected capacity change from 0 to 142488 Mar 4 01:01:51.084481 kernel: loop3: detected capacity change from 0 to 140768 Mar 4 01:01:51.153310 kernel: loop4: detected capacity change from 0 to 217752 Mar 4 01:01:51.670896 kernel: hrtimer: interrupt took 3105279 ns Mar 4 01:01:51.815635 kernel: loop5: detected capacity change from 0 to 142488 Mar 4 01:01:51.865919 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 4 01:01:51.870580 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 4 01:01:51.875664 (sd-merge)[1199]: Merged extensions into '/usr'. Mar 4 01:01:52.467702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:01:52.505705 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Mar 4 01:01:52.514232 systemd[1]: Reloading... Mar 4 01:01:52.709882 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 4 01:01:52.709911 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 4 01:01:52.819484 zram_generator::config[1227]: No configuration found. Mar 4 01:01:53.121398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:01:53.216361 systemd[1]: Reloading finished in 687 ms. Mar 4 01:01:53.289893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 4 01:01:53.315002 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 4 01:01:53.375398 systemd[1]: Starting ensure-sysext.service... Mar 4 01:01:53.396499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 01:01:53.418932 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)... Mar 4 01:01:53.419032 systemd[1]: Reloading... Mar 4 01:01:53.443537 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 4 01:01:53.475965 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 4 01:01:53.476669 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 4 01:01:53.478963 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 4 01:01:53.479852 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Mar 4 01:01:53.480051 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Mar 4 01:01:53.510648 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 01:01:53.511297 systemd-tmpfiles[1268]: Skipping /boot Mar 4 01:01:53.563697 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 01:01:53.563884 systemd-tmpfiles[1268]: Skipping /boot Mar 4 01:01:53.632464 zram_generator::config[1296]: No configuration found. Mar 4 01:01:53.934555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:01:54.014444 systemd[1]: Reloading finished in 591 ms. Mar 4 01:01:54.048522 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 4 01:01:54.064684 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 01:01:54.096406 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:01:54.168316 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:01:54.185546 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 01:01:54.205419 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 01:01:54.243407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:01:54.262433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:01:54.294401 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 01:01:54.315576 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 01:01:54.355859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:01:54.356510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:01:54.380649 augenrules[1358]: No rules
Mar 4 01:01:54.370927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:01:54.376069 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Mar 4 01:01:54.402461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:01:54.429751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:01:54.444605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:01:54.449558 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 01:01:54.480245 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 01:01:54.491017 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:01:54.498990 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:01:54.518626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 01:01:54.544438 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:01:54.563693 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 01:01:54.578855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:01:54.579285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:01:54.599033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:01:54.599457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:01:54.622899 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:01:54.623509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:01:54.641685 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 01:01:54.657505 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 01:01:54.724009 systemd[1]: Finished ensure-sysext.service.
Mar 4 01:01:54.767365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1374)
Mar 4 01:01:54.774369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:01:54.774688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:01:54.784892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:01:54.798910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:01:54.813411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:01:54.850428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:01:54.858241 systemd-resolved[1346]: Positive Trust Anchors:
Mar 4 01:01:54.858257 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:01:54.858299 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:01:54.862040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:01:54.875534 systemd-resolved[1346]: Defaulting to hostname 'linux'.
Mar 4 01:01:54.889281 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:01:54.925744 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 4 01:01:54.938450 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 01:01:54.938505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:01:54.939456 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:01:54.951739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:01:54.952478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:01:54.973465 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:01:54.974879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:01:54.995548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:01:54.999043 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:01:55.047273 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 4 01:01:55.028656 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:01:55.029341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:01:55.070402 kernel: ACPI: button: Power Button [PWRF]
Mar 4 01:01:55.111624 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 4 01:01:55.135962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:01:55.162321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:01:55.189493 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 01:01:55.202918 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:01:55.204403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:01:55.252241 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 4 01:01:55.295926 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 4 01:01:55.347612 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 4 01:01:55.465400 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 4 01:01:55.516946 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 4 01:01:55.618501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:01:55.818649 systemd-networkd[1407]: lo: Link UP
Mar 4 01:01:55.818743 systemd-networkd[1407]: lo: Gained carrier
Mar 4 01:01:55.823564 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 4 01:01:55.835553 systemd-networkd[1407]: Enumeration completed
Mar 4 01:01:55.843585 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:01:55.853548 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:01:55.853936 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:01:55.856928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 01:01:55.870722 systemd-networkd[1407]: eth0: Link UP
Mar 4 01:01:55.871258 systemd-networkd[1407]: eth0: Gained carrier
Mar 4 01:01:55.871451 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:01:55.918382 systemd[1]: Reached target network.target - Network.
Mar 4 01:01:55.933968 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 01:01:55.974452 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:01:55.995750 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
Mar 4 01:01:57.314802 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 4 01:01:57.314874 systemd-timesyncd[1409]: Initial clock synchronization to Wed 2026-03-04 01:01:57.311132 UTC.
Mar 4 01:01:57.314943 systemd-resolved[1346]: Clock change detected. Flushing caches.
Mar 4 01:01:57.326710 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 01:01:57.379531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:01:57.380438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:01:57.422144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:01:57.794871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:01:58.522918 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 01:01:58.724171 kernel: kvm_amd: TSC scaling supported
Mar 4 01:01:58.725776 kernel: kvm_amd: Nested Virtualization enabled
Mar 4 01:01:58.740008 kernel: kvm_amd: Nested Paging enabled
Mar 4 01:01:58.745362 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 4 01:01:58.771477 kernel: kvm_amd: PMU virtualization is disabled
Mar 4 01:01:58.778499 systemd-networkd[1407]: eth0: Gained IPv6LL
Mar 4 01:01:58.830954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 01:01:58.861419 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 01:02:00.051905 kernel: EDAC MC: Ver: 3.0.0
Mar 4 01:02:00.158501 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 01:02:00.195432 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 01:02:00.316233 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:02:00.423162 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 01:02:00.445435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:02:00.464459 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:02:00.479542 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 01:02:00.496917 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 01:02:00.511213 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 01:02:00.525229 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 01:02:00.541169 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 01:02:00.558386 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 01:02:00.558748 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:02:00.568337 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:02:00.584440 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 01:02:00.601140 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 01:02:00.628219 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 01:02:00.642531 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 01:02:00.658422 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 01:02:00.671102 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:02:00.681915 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:02:00.694056 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:02:00.694190 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:02:00.714111 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 01:02:00.735089 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 4 01:02:00.761174 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:02:00.764370 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 01:02:00.775466 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 01:02:00.803848 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 01:02:00.815528 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 01:02:00.819918 jq[1449]: false
Mar 4 01:02:00.821518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:02:00.850014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 01:02:00.857197 extend-filesystems[1450]: Found loop3
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found loop4
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found loop5
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found sr0
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda1
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda2
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda3
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found usr
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda4
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda6
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda7
Mar 4 01:02:00.878510 extend-filesystems[1450]: Found vda9
Mar 4 01:02:00.878510 extend-filesystems[1450]: Checking size of /dev/vda9
Mar 4 01:02:01.232187 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 4 01:02:01.232233 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 4 01:02:01.232248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1381)
Mar 4 01:02:00.869945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 01:02:00.972978 dbus-daemon[1448]: [system] SELinux support is enabled
Mar 4 01:02:01.241818 extend-filesystems[1450]: Resized partition /dev/vda9
Mar 4 01:02:00.897877 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 01:02:01.261426 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Mar 4 01:02:01.261426 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 4 01:02:01.261426 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 4 01:02:01.261426 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 4 01:02:00.912838 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 01:02:01.325505 extend-filesystems[1450]: Resized filesystem in /dev/vda9
Mar 4 01:02:00.940371 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 01:02:00.979477 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 01:02:00.992395 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 01:02:00.993715 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 01:02:00.997720 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 01:02:01.336425 update_engine[1466]: I20260304 01:02:01.300726 1466 main.cc:92] Flatcar Update Engine starting
Mar 4 01:02:01.336425 update_engine[1466]: I20260304 01:02:01.308904 1466 update_check_scheduler.cc:74] Next update check in 11m0s
Mar 4 01:02:01.017363 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 01:02:01.338878 jq[1467]: true
Mar 4 01:02:01.039127 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 01:02:01.090092 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 01:02:01.162187 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 01:02:01.162486 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 01:02:01.169864 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 01:02:01.170556 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 01:02:01.234444 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 01:02:01.242881 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 01:02:01.244849 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 01:02:01.343533 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 01:02:01.344151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 01:02:01.352869 jq[1484]: true
Mar 4 01:02:01.359055 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 4 01:02:01.359520 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 4 01:02:01.362049 systemd-logind[1464]: New seat seat0.
Mar 4 01:02:01.375546 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 01:02:01.416233 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 4 01:02:01.417114 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 4 01:02:01.437382 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 01:02:01.509023 tar[1480]: linux-amd64/LICENSE
Mar 4 01:02:01.525051 tar[1480]: linux-amd64/helm
Mar 4 01:02:01.555527 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 01:02:01.607015 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 01:02:01.607534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 01:02:01.607890 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 01:02:01.647127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 01:02:01.647778 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 01:02:01.701859 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 01:02:01.771547 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 4 01:02:01.815081 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 01:02:01.821736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 4 01:02:01.841083 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 4 01:02:03.122231 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 4 01:02:03.166842 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 4 01:02:03.236398 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 01:02:03.433171 systemd[1]: issuegen.service: Deactivated successfully.
Mar 4 01:02:03.450126 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 4 01:02:04.010426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 4 01:02:04.870511 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 4 01:02:04.915539 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 4 01:02:04.935795 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 4 01:02:04.948219 systemd[1]: Reached target getty.target - Login Prompts.
Mar 4 01:02:06.866365 containerd[1494]: time="2026-03-04T01:02:06.865183465Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 01:02:06.923195 containerd[1494]: time="2026-03-04T01:02:06.921130136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.929505 containerd[1494]: time="2026-03-04T01:02:06.929124185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:02:06.929505 containerd[1494]: time="2026-03-04T01:02:06.929248768Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 01:02:06.929505 containerd[1494]: time="2026-03-04T01:02:06.929368832Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 01:02:06.930155 containerd[1494]: time="2026-03-04T01:02:06.929839020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 01:02:06.930155 containerd[1494]: time="2026-03-04T01:02:06.930016081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.930155 containerd[1494]: time="2026-03-04T01:02:06.930111329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:02:06.930155 containerd[1494]: time="2026-03-04T01:02:06.930129252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.930722 containerd[1494]: time="2026-03-04T01:02:06.930468105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.931131474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.931235928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.931254313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.931480876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.932178198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.932440749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.932460325Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 01:02:06.932907 containerd[1494]: time="2026-03-04T01:02:06.932760836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 01:02:06.933366 containerd[1494]: time="2026-03-04T01:02:06.932947885Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 01:02:06.948966 tar[1480]: linux-amd64/README.md
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.966887868Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.967762857Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.967800608Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.967825395Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.967845452Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.968526113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.969415885Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 01:02:06.970066 containerd[1494]: time="2026-03-04T01:02:06.969779564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 01:02:06.970502 containerd[1494]: time="2026-03-04T01:02:06.970464302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 01:02:06.970502 containerd[1494]: time="2026-03-04T01:02:06.970493898Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970518904Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970539563Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970729578Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970760275Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970782316Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970802915Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970821008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970838481Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970950801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970977491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.970996236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.971014289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.971034326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.971247 containerd[1494]: time="2026-03-04T01:02:06.971052921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971071606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971091643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971112413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971139573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971157106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971173466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971191410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971219913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971481041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971507580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.972208 containerd[1494]: time="2026-03-04T01:02:06.971525865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.973245085Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.973448695Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.973470696Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.973489311Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.974018479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.974531357Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.974555351Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 01:02:06.975025 containerd[1494]: time="2026-03-04T01:02:06.974739676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 01:02:06.993046 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:06.990421319Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:06.990542845Z" level=info msg="Connect containerd service"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:06.990859587Z" level=info msg="using legacy CRI server"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:06.990880857Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:06.991819950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:06.998032762Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.001211037Z" level=info msg="Start subscribing containerd event"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.001548998Z" level=info msg="Start recovering state"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.002045415Z" level=info msg="Start event monitor"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.002154940Z" level=info msg="Start snapshots syncer"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.002175708Z" level=info msg="Start cni network conf syncer for default"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.002197910Z" level=info msg="Start streaming server"
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.005873371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 4 01:02:07.006365 containerd[1494]: time="2026-03-04T01:02:07.006367824Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 4 01:02:07.007502 containerd[1494]: time="2026-03-04T01:02:07.006449927Z" level=info msg="containerd successfully booted in 0.147224s"
Mar 4 01:02:07.008783 systemd[1]: Started containerd.service - containerd container runtime.
Mar 4 01:02:07.451950 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 4 01:02:07.507140 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:51184.service - OpenSSH per-connection server daemon (10.0.0.1:51184).
Mar 4 01:02:07.787908 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 51184 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:08.844475 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:08.909742 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 4 01:02:08.953441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 4 01:02:09.108897 systemd-logind[1464]: New session 1 of user core.
Mar 4 01:02:09.675832 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 4 01:02:09.826068 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 4 01:02:10.375037 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 4 01:02:11.834022 systemd[1561]: Queued start job for default target default.target.
Mar 4 01:02:11.856157 systemd[1561]: Created slice app.slice - User Application Slice.
Mar 4 01:02:11.856206 systemd[1561]: Reached target paths.target - Paths.
Mar 4 01:02:11.856229 systemd[1561]: Reached target timers.target - Timers.
Mar 4 01:02:11.860913 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 4 01:02:11.963943 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 4 01:02:11.965732 systemd[1561]: Reached target sockets.target - Sockets.
Mar 4 01:02:11.965758 systemd[1561]: Reached target basic.target - Basic System.
Mar 4 01:02:11.965916 systemd[1561]: Reached target default.target - Main User Target.
Mar 4 01:02:11.965975 systemd[1561]: Startup finished in 1.517s.
Mar 4 01:02:11.982186 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 4 01:02:12.045029 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 4 01:02:12.362367 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:47956.service - OpenSSH per-connection server daemon (10.0.0.1:47956).
Mar 4 01:02:12.620812 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 47956 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:12.633511 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:12.665255 systemd-logind[1464]: New session 2 of user core.
Mar 4 01:02:12.677188 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 4 01:02:13.122255 sshd[1576]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:13.135148 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:47956.service: Deactivated successfully.
Mar 4 01:02:13.141866 systemd[1]: session-2.scope: Deactivated successfully.
Mar 4 01:02:13.146822 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit.
Mar 4 01:02:13.159941 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:47962.service - OpenSSH per-connection server daemon (10.0.0.1:47962).
Mar 4 01:02:13.165007 systemd-logind[1464]: Removed session 2.
Mar 4 01:02:13.270785 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 47962 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:13.277227 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:13.308247 systemd-logind[1464]: New session 3 of user core.
Mar 4 01:02:13.314887 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 4 01:02:13.862718 sshd[1583]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:13.877953 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:47962.service: Deactivated successfully.
Mar 4 01:02:13.885087 systemd[1]: session-3.scope: Deactivated successfully.
Mar 4 01:02:13.904425 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit.
Mar 4 01:02:13.909806 systemd-logind[1464]: Removed session 3.
Mar 4 01:02:16.517170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:02:16.518523 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 4 01:02:16.521091 systemd[1]: Startup finished in 9.932s (kernel) + 14.858s (initrd) + 30.276s (userspace) = 55.067s.
Mar 4 01:02:16.562945 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:02:23.426471 kubelet[1594]: E0304 01:02:23.425054 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:02:23.440065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:02:23.440868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:02:23.444795 systemd[1]: kubelet.service: Consumed 18.724s CPU time.
Mar 4 01:02:23.958852 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:55956.service - OpenSSH per-connection server daemon (10.0.0.1:55956).
Mar 4 01:02:24.132499 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 55956 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:24.151241 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:24.215933 systemd-logind[1464]: New session 4 of user core.
Mar 4 01:02:24.237104 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 4 01:02:24.543424 sshd[1603]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:24.572816 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:55956.service: Deactivated successfully.
Mar 4 01:02:24.604813 systemd[1]: session-4.scope: Deactivated successfully.
Mar 4 01:02:24.609008 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit.
Mar 4 01:02:24.625938 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:55958.service - OpenSSH per-connection server daemon (10.0.0.1:55958).
Mar 4 01:02:24.636203 systemd-logind[1464]: Removed session 4.
Mar 4 01:02:24.803901 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 55958 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:24.811512 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:24.827881 systemd-logind[1464]: New session 5 of user core.
Mar 4 01:02:24.838763 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 4 01:02:24.919038 sshd[1610]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:24.941434 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:55958.service: Deactivated successfully.
Mar 4 01:02:24.948979 systemd[1]: session-5.scope: Deactivated successfully.
Mar 4 01:02:24.952881 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit.
Mar 4 01:02:24.967188 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:55964.service - OpenSSH per-connection server daemon (10.0.0.1:55964).
Mar 4 01:02:24.973177 systemd-logind[1464]: Removed session 5.
Mar 4 01:02:25.049002 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 55964 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:25.054059 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:25.071404 systemd-logind[1464]: New session 6 of user core.
Mar 4 01:02:25.081972 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 4 01:02:25.183069 sshd[1617]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:25.203034 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:55964.service: Deactivated successfully.
Mar 4 01:02:25.207500 systemd[1]: session-6.scope: Deactivated successfully.
Mar 4 01:02:25.213241 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit.
Mar 4 01:02:25.223990 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:55970.service - OpenSSH per-connection server daemon (10.0.0.1:55970).
Mar 4 01:02:25.227086 systemd-logind[1464]: Removed session 6.
Mar 4 01:02:25.285285 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 55970 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:25.288442 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:25.310730 systemd-logind[1464]: New session 7 of user core.
Mar 4 01:02:25.318496 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 01:02:25.435115 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 01:02:25.436065 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:02:25.474436 sudo[1627]: pam_unix(sudo:session): session closed for user root
Mar 4 01:02:25.480896 sshd[1624]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:25.521004 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:55970.service: Deactivated successfully.
Mar 4 01:02:25.526425 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 01:02:25.532207 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit.
Mar 4 01:02:25.546815 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:55986.service - OpenSSH per-connection server daemon (10.0.0.1:55986).
Mar 4 01:02:25.549552 systemd-logind[1464]: Removed session 7.
Mar 4 01:02:25.638484 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 55986 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:25.641131 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:25.674452 systemd-logind[1464]: New session 8 of user core.
Mar 4 01:02:25.687079 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 01:02:25.779515 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 01:02:25.780948 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:02:25.803127 sudo[1636]: pam_unix(sudo:session): session closed for user root
Mar 4 01:02:25.818920 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 01:02:25.820976 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:02:25.886842 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 01:02:25.904956 auditctl[1639]: No rules
Mar 4 01:02:25.908492 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 01:02:25.910843 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 01:02:25.931809 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:02:26.052755 augenrules[1657]: No rules
Mar 4 01:02:26.054230 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:02:26.058248 sudo[1635]: pam_unix(sudo:session): session closed for user root
Mar 4 01:02:26.062234 sshd[1632]: pam_unix(sshd:session): session closed for user core
Mar 4 01:02:26.089249 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:55986.service: Deactivated successfully.
Mar 4 01:02:26.098144 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 01:02:26.105792 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit.
Mar 4 01:02:26.130232 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:56000.service - OpenSSH per-connection server daemon (10.0.0.1:56000).
Mar 4 01:02:26.132906 systemd-logind[1464]: Removed session 8.
Mar 4 01:02:26.190013 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 56000 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:02:26.193426 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:02:26.214816 systemd-logind[1464]: New session 9 of user core.
Mar 4 01:02:26.229152 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 01:02:26.313883 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 01:02:26.314912 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:02:27.417883 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 01:02:27.422800 (dockerd)[1688]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 01:02:28.282499 dockerd[1688]: time="2026-03-04T01:02:28.281176145Z" level=info msg="Starting up"
Mar 4 01:02:28.648893 dockerd[1688]: time="2026-03-04T01:02:28.647296352Z" level=info msg="Loading containers: start."
Mar 4 01:02:29.207924 kernel: Initializing XFRM netlink socket
Mar 4 01:02:29.711256 systemd-networkd[1407]: docker0: Link UP
Mar 4 01:02:29.812199 dockerd[1688]: time="2026-03-04T01:02:29.811222295Z" level=info msg="Loading containers: done."
Mar 4 01:02:30.377533 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1165390812-merged.mount: Deactivated successfully.
Mar 4 01:02:30.430906 dockerd[1688]: time="2026-03-04T01:02:30.429133629Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 01:02:30.433454 dockerd[1688]: time="2026-03-04T01:02:30.432144677Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 01:02:30.433454 dockerd[1688]: time="2026-03-04T01:02:30.433039819Z" level=info msg="Daemon has completed initialization"
Mar 4 01:02:30.711171 dockerd[1688]: time="2026-03-04T01:02:30.709450253Z" level=info msg="API listen on /run/docker.sock"
Mar 4 01:02:30.711399 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 01:02:33.531956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 4 01:02:33.612484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:02:37.026840 containerd[1494]: time="2026-03-04T01:02:37.024801480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 4 01:02:38.164357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:02:38.344478 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:02:38.859457 kubelet[1844]: E0304 01:02:38.858961 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:02:38.872163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:02:38.872819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:02:38.873872 systemd[1]: kubelet.service: Consumed 4.472s CPU time.
Mar 4 01:02:39.383981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278843351.mount: Deactivated successfully.
Mar 4 01:02:43.464928 containerd[1494]: time="2026-03-04T01:02:43.464690466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:43.466605 containerd[1494]: time="2026-03-04T01:02:43.466408674Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 4 01:02:43.468227 containerd[1494]: time="2026-03-04T01:02:43.468119164Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:43.473539 containerd[1494]: time="2026-03-04T01:02:43.473504119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:43.476058 containerd[1494]: time="2026-03-04T01:02:43.475916204Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 6.450822986s"
Mar 4 01:02:43.476058 containerd[1494]: time="2026-03-04T01:02:43.475987113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 4 01:02:43.478856 containerd[1494]: time="2026-03-04T01:02:43.478628276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 4 01:02:46.359887 containerd[1494]: time="2026-03-04T01:02:46.359388268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:46.363355 containerd[1494]: time="2026-03-04T01:02:46.363267472Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 4 01:02:46.367630 containerd[1494]: time="2026-03-04T01:02:46.366408747Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:46.373008 containerd[1494]: time="2026-03-04T01:02:46.372817294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:46.375491 containerd[1494]: time="2026-03-04T01:02:46.375393337Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 2.896696928s"
Mar 4 01:02:46.375491 containerd[1494]: time="2026-03-04T01:02:46.375476969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 4 01:02:46.379006 containerd[1494]: time="2026-03-04T01:02:46.378931613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 4 01:02:46.455914 update_engine[1466]: I20260304 01:02:46.455747 1466 update_attempter.cc:509] Updating boot flags...
Mar 4 01:02:46.581231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1921)
Mar 4 01:02:49.033093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 4 01:02:49.050798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:02:49.703420 containerd[1494]: time="2026-03-04T01:02:49.703277362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:49.711937 containerd[1494]: time="2026-03-04T01:02:49.711857110Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 4 01:02:49.727804 containerd[1494]: time="2026-03-04T01:02:49.727748105Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:49.734403 containerd[1494]: time="2026-03-04T01:02:49.734257906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:49.736041 containerd[1494]: time="2026-03-04T01:02:49.735943150Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 3.356936351s"
Mar 4 01:02:49.736117 containerd[1494]: time="2026-03-04T01:02:49.736042582Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 4 01:02:49.740181 containerd[1494]: time="2026-03-04T01:02:49.740107274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 4 01:02:49.755421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:02:49.776272 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:02:49.921178 kubelet[1939]: E0304 01:02:49.921022 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:02:49.926627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:02:49.926952 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:02:52.545449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834087890.mount: Deactivated successfully.
Mar 4 01:02:53.676631 containerd[1494]: time="2026-03-04T01:02:53.676263459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:53.680535 containerd[1494]: time="2026-03-04T01:02:53.679706282Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 4 01:02:53.684197 containerd[1494]: time="2026-03-04T01:02:53.684129205Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:53.699430 containerd[1494]: time="2026-03-04T01:02:53.698979520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:53.701157 containerd[1494]: time="2026-03-04T01:02:53.700942999Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 3.960771137s"
Mar 4 01:02:53.701157 containerd[1494]: time="2026-03-04T01:02:53.701055507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 4 01:02:53.703707 containerd[1494]: time="2026-03-04T01:02:53.703653628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 4 01:02:54.291090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098046337.mount: Deactivated successfully.
Mar 4 01:02:58.567407 containerd[1494]: time="2026-03-04T01:02:58.566900001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:58.570043 containerd[1494]: time="2026-03-04T01:02:58.567624618Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 4 01:02:58.570043 containerd[1494]: time="2026-03-04T01:02:58.569929747Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:58.583958 containerd[1494]: time="2026-03-04T01:02:58.583269944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:58.584075 containerd[1494]: time="2026-03-04T01:02:58.584008414Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 4.880301819s"
Mar 4 01:02:58.584075 containerd[1494]: time="2026-03-04T01:02:58.584050070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 4 01:02:58.593223 containerd[1494]: time="2026-03-04T01:02:58.593013779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 4 01:02:59.301106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705356830.mount: Deactivated successfully.
Mar 4 01:02:59.317823 containerd[1494]: time="2026-03-04T01:02:59.317695230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:59.319276 containerd[1494]: time="2026-03-04T01:02:59.319014714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 4 01:02:59.321507 containerd[1494]: time="2026-03-04T01:02:59.321317884Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:59.327254 containerd[1494]: time="2026-03-04T01:02:59.326958733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:02:59.328968 containerd[1494]: time="2026-03-04T01:02:59.328795131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 735.710522ms"
Mar 4 01:02:59.329239 containerd[1494]: time="2026-03-04T01:02:59.328888273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 4 01:02:59.333254 containerd[1494]: time="2026-03-04T01:02:59.332862357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 4 01:03:00.024439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 01:03:00.039136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:03:00.052846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472577502.mount: Deactivated successfully.
Mar 4 01:03:00.965643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:03:01.018507 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:03:01.359926 kubelet[2037]: E0304 01:03:01.358922 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:03:01.363223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:03:01.363535 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:03:01.364974 systemd[1]: kubelet.service: Consumed 1.099s CPU time.
Mar 4 01:03:03.738889 containerd[1494]: time="2026-03-04T01:03:03.738678628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:03.741640 containerd[1494]: time="2026-03-04T01:03:03.741321798Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 4 01:03:03.745221 containerd[1494]: time="2026-03-04T01:03:03.742985647Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:03.748538 containerd[1494]: time="2026-03-04T01:03:03.748428339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:03.752401 containerd[1494]: time="2026-03-04T01:03:03.751714307Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 4.418802017s" Mar 4 01:03:03.752401 containerd[1494]: time="2026-03-04T01:03:03.751784987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 4 01:03:07.087788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:03:07.087982 systemd[1]: kubelet.service: Consumed 1.099s CPU time. Mar 4 01:03:07.100026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:03:07.151459 systemd[1]: Reloading requested from client PID 2125 ('systemctl') (unit session-9.scope)... 
Mar 4 01:03:07.151514 systemd[1]: Reloading... Mar 4 01:03:07.278746 zram_generator::config[2167]: No configuration found. Mar 4 01:03:07.490469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:03:07.637812 systemd[1]: Reloading finished in 485 ms. Mar 4 01:03:07.748533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:03:07.753764 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:03:07.764648 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:03:07.765152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:03:07.792107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:03:08.111351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:03:08.143204 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:03:08.389817 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:03:08.576877 kubelet[2213]: I0304 01:03:08.576393 2213 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 4 01:03:08.576877 kubelet[2213]: I0304 01:03:08.576687 2213 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:03:08.576877 kubelet[2213]: I0304 01:03:08.576791 2213 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:03:08.576877 kubelet[2213]: I0304 01:03:08.576804 2213 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 01:03:08.578546 kubelet[2213]: I0304 01:03:08.578467 2213 server.go:951] "Client rotation is on, will bootstrap in background" Mar 4 01:03:08.816482 kubelet[2213]: E0304 01:03:08.816071 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:03:08.820398 kubelet[2213]: I0304 01:03:08.818928 2213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:03:08.845728 kubelet[2213]: E0304 01:03:08.844996 2213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:03:08.845728 kubelet[2213]: I0304 01:03:08.845109 2213 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 01:03:08.904970 kubelet[2213]: I0304 01:03:08.904420 2213 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 01:03:08.908487 kubelet[2213]: I0304 01:03:08.908366 2213 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:03:08.909981 kubelet[2213]: I0304 01:03:08.908468 2213 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 01:03:08.910801 kubelet[2213]: I0304 01:03:08.910016 2213 topology_manager.go:143] "Creating topology manager with none policy" Mar 4 01:03:08.910801 
kubelet[2213]: I0304 01:03:08.910035 2213 container_manager_linux.go:308] "Creating device plugin manager" Mar 4 01:03:08.910801 kubelet[2213]: I0304 01:03:08.910729 2213 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 01:03:08.931462 kubelet[2213]: I0304 01:03:08.930419 2213 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 4 01:03:08.931462 kubelet[2213]: I0304 01:03:08.931033 2213 kubelet.go:482] "Attempting to sync node with API server" Mar 4 01:03:08.931462 kubelet[2213]: I0304 01:03:08.931085 2213 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:03:08.935707 kubelet[2213]: I0304 01:03:08.932477 2213 kubelet.go:394] "Adding apiserver pod source" Mar 4 01:03:08.935707 kubelet[2213]: I0304 01:03:08.932541 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:03:08.959184 kubelet[2213]: I0304 01:03:08.959045 2213 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:03:08.990737 kubelet[2213]: I0304 01:03:08.990446 2213 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:03:08.990737 kubelet[2213]: I0304 01:03:08.990739 2213 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:03:08.996808 kubelet[2213]: W0304 01:03:08.994467 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 4 01:03:09.007010 kubelet[2213]: I0304 01:03:09.006914 2213 server.go:1257] "Started kubelet" Mar 4 01:03:09.011843 kubelet[2213]: I0304 01:03:09.011357 2213 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:03:09.011843 kubelet[2213]: I0304 01:03:09.011412 2213 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:03:09.012993 kubelet[2213]: I0304 01:03:09.012127 2213 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 01:03:09.016486 kubelet[2213]: I0304 01:03:09.015873 2213 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 4 01:03:09.016486 kubelet[2213]: I0304 01:03:09.016046 2213 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:03:09.029734 kubelet[2213]: I0304 01:03:09.026766 2213 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:03:09.031149 kubelet[2213]: I0304 01:03:09.031042 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:03:09.055533 kubelet[2213]: E0304 01:03:09.035655 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997dbf518425d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:03:09.006841301 +0000 UTC m=+0.846350742,LastTimestamp:2026-03-04 01:03:09.006841301 +0000 UTC m=+0.846350742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 4 01:03:09.057236 kubelet[2213]: E0304 01:03:09.057090 2213 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:03:09.057517 kubelet[2213]: I0304 01:03:09.057449 2213 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 4 01:03:09.150690 kubelet[2213]: I0304 01:03:09.149945 2213 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:03:09.151255 kubelet[2213]: I0304 01:03:09.150756 2213 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:03:09.152310 kubelet[2213]: E0304 01:03:09.151947 2213 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Mar 4 01:03:09.157683 kubelet[2213]: E0304 01:03:09.157535 2213 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:03:09.160471 kubelet[2213]: I0304 01:03:09.160377 2213 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:03:09.160706 kubelet[2213]: I0304 01:03:09.160677 2213 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:03:09.163908 kubelet[2213]: E0304 01:03:09.163737 2213 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:03:09.164894 kubelet[2213]: I0304 01:03:09.164819 2213 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:03:09.197397 kubelet[2213]: I0304 01:03:09.196708 2213 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 01:03:09.202875 kubelet[2213]: I0304 01:03:09.202793 2213 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 01:03:09.202988 kubelet[2213]: I0304 01:03:09.202977 2213 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 4 01:03:09.203546 kubelet[2213]: I0304 01:03:09.203363 2213 kubelet.go:2501] "Starting kubelet main sync loop" Mar 4 01:03:09.204042 kubelet[2213]: E0304 01:03:09.203733 2213 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:03:09.253780 kubelet[2213]: I0304 01:03:09.253504 2213 cpu_manager.go:225] "Starting" policy="none" Mar 4 01:03:09.254162 kubelet[2213]: I0304 01:03:09.253948 2213 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 4 01:03:09.254162 kubelet[2213]: I0304 01:03:09.254052 2213 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 4 01:03:09.258438 kubelet[2213]: E0304 01:03:09.258251 2213 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:03:09.270905 kubelet[2213]: I0304 01:03:09.270796 2213 policy_none.go:50] "Start" Mar 4 01:03:09.271031 kubelet[2213]: I0304 01:03:09.270926 2213 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:03:09.271031 kubelet[2213]: I0304 01:03:09.271015 2213 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 01:03:09.276600 kubelet[2213]: I0304 01:03:09.276502 2213 
policy_none.go:44] "Start" Mar 4 01:03:09.527890 kubelet[2213]: E0304 01:03:09.525729 2213 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:03:09.527890 kubelet[2213]: E0304 01:03:09.525838 2213 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 01:03:09.529221 kubelet[2213]: E0304 01:03:09.528132 2213 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Mar 4 01:03:09.536232 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 4 01:03:09.620829 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 4 01:03:09.627065 kubelet[2213]: E0304 01:03:09.626821 2213 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:03:09.671037 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 01:03:09.677430 kubelet[2213]: E0304 01:03:09.677336 2213 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:03:09.678794 kubelet[2213]: I0304 01:03:09.678725 2213 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 4 01:03:09.679389 kubelet[2213]: I0304 01:03:09.679188 2213 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:03:09.717968 kubelet[2213]: I0304 01:03:09.703481 2213 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 4 01:03:09.722454 kubelet[2213]: E0304 01:03:09.722353 2213 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" Mar 4 01:03:09.722550 kubelet[2213]: E0304 01:03:09.722493 2213 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 4 01:03:09.812690 kubelet[2213]: I0304 01:03:09.809769 2213 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:09.814451 kubelet[2213]: E0304 01:03:09.814066 2213 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 4 01:03:09.816891 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 4 01:03:09.830418 kubelet[2213]: I0304 01:03:09.829928 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:09.830418 kubelet[2213]: I0304 01:03:09.830096 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:09.830418 kubelet[2213]: I0304 01:03:09.830129 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:09.830418 kubelet[2213]: I0304 01:03:09.830160 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:09.830418 kubelet[2213]: I0304 01:03:09.830181 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:09.831035 kubelet[2213]: I0304 01:03:09.830223 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:09.843976 kubelet[2213]: E0304 01:03:09.843127 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:09.850814 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 4 01:03:09.856074 kubelet[2213]: E0304 01:03:09.855885 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:09.878942 systemd[1]: Created slice kubepods-burstable-pod7cd35a68b65c2f608029bd2f2089b76c.slice - libcontainer container kubepods-burstable-pod7cd35a68b65c2f608029bd2f2089b76c.slice. Mar 4 01:03:09.896963 kubelet[2213]: E0304 01:03:09.896101 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:09.945959 kubelet[2213]: I0304 01:03:09.944244 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cd35a68b65c2f608029bd2f2089b76c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cd35a68b65c2f608029bd2f2089b76c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:09.945959 kubelet[2213]: I0304 01:03:09.944431 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cd35a68b65c2f608029bd2f2089b76c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7cd35a68b65c2f608029bd2f2089b76c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:09.945959 kubelet[2213]: I0304 01:03:09.944483 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cd35a68b65c2f608029bd2f2089b76c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cd35a68b65c2f608029bd2f2089b76c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:09.945959 kubelet[2213]: E0304 01:03:09.945198 2213 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Mar 4 01:03:10.021777 kubelet[2213]: I0304 01:03:10.020820 2213 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:10.021777 kubelet[2213]: E0304 01:03:10.021386 2213 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 4 01:03:10.225106 kubelet[2213]: E0304 01:03:10.221440 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:10.225953 kubelet[2213]: E0304 01:03:10.224992 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:10.226041 containerd[1494]: time="2026-03-04T01:03:10.225553249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7cd35a68b65c2f608029bd2f2089b76c,Namespace:kube-system,Attempt:0,}" Mar 4 01:03:10.226907 containerd[1494]: time="2026-03-04T01:03:10.226686187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 4 01:03:10.233881 kubelet[2213]: E0304 01:03:10.233797 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:10.234674 containerd[1494]: time="2026-03-04T01:03:10.234635344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 
4 01:03:10.427240 kubelet[2213]: I0304 01:03:10.426111 2213 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:10.427240 kubelet[2213]: E0304 01:03:10.426731 2213 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 4 01:03:10.746732 kubelet[2213]: E0304 01:03:10.746239 2213 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Mar 4 01:03:10.928708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262683049.mount: Deactivated successfully. Mar 4 01:03:10.960020 containerd[1494]: time="2026-03-04T01:03:10.959147519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:03:10.971416 containerd[1494]: time="2026-03-04T01:03:10.970536072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:03:10.979457 containerd[1494]: time="2026-03-04T01:03:10.978360888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:03:11.006033 containerd[1494]: time="2026-03-04T01:03:10.994234626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:03:11.010895 containerd[1494]: time="2026-03-04T01:03:11.010188809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:03:11.016170 
kubelet[2213]: E0304 01:03:11.015482 2213 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:03:11.017758 containerd[1494]: time="2026-03-04T01:03:11.016504294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 4 01:03:11.020845 containerd[1494]: time="2026-03-04T01:03:11.018527671Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:03:11.030450 containerd[1494]: time="2026-03-04T01:03:11.029962489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:03:11.032788 containerd[1494]: time="2026-03-04T01:03:11.032707198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 797.799169ms" Mar 4 01:03:11.105921 containerd[1494]: time="2026-03-04T01:03:11.103713512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 876.915579ms" Mar 4 01:03:11.113553 containerd[1494]: time="2026-03-04T01:03:11.110157733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 884.250646ms" Mar 4 01:03:11.280932 kubelet[2213]: I0304 01:03:11.278534 2213 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:11.310993 kubelet[2213]: E0304 01:03:11.309714 2213 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976803333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976909250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976937292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976755360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976838153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976862238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.977050087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976729574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976825352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:11.977544 containerd[1494]: time="2026-03-04T01:03:11.976988014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:11.979482 containerd[1494]: time="2026-03-04T01:03:11.977843013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:11.987556 containerd[1494]: time="2026-03-04T01:03:11.981319580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:12.266010 systemd[1]: Started cri-containerd-bec4a2d403172ed9ff76170298befc9de1f6952dddc244e43272b02fd5cbea53.scope - libcontainer container bec4a2d403172ed9ff76170298befc9de1f6952dddc244e43272b02fd5cbea53. 
Mar 4 01:03:12.349542 kubelet[2213]: E0304 01:03:12.349070 2213 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="3.2s" Mar 4 01:03:12.356918 systemd[1]: Started cri-containerd-7311e5015ebddd7f15cf59f9f024ef5192a9cf7884b9ffc23c59fe31b738f667.scope - libcontainer container 7311e5015ebddd7f15cf59f9f024ef5192a9cf7884b9ffc23c59fe31b738f667. Mar 4 01:03:12.663746 systemd[1]: Started cri-containerd-e50636680c7ec37541b1d03e55d72dc837c25a8496bc9530613d8f71fa2feb25.scope - libcontainer container e50636680c7ec37541b1d03e55d72dc837c25a8496bc9530613d8f71fa2feb25. Mar 4 01:03:12.910498 containerd[1494]: time="2026-03-04T01:03:12.910141643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"bec4a2d403172ed9ff76170298befc9de1f6952dddc244e43272b02fd5cbea53\"" Mar 4 01:03:12.918847 kubelet[2213]: I0304 01:03:12.918430 2213 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:12.919104 kubelet[2213]: E0304 01:03:12.918988 2213 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 4 01:03:12.924381 kubelet[2213]: E0304 01:03:12.921854 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:12.947706 containerd[1494]: time="2026-03-04T01:03:12.947097667Z" level=info msg="CreateContainer within sandbox \"bec4a2d403172ed9ff76170298befc9de1f6952dddc244e43272b02fd5cbea53\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:03:13.017157 
containerd[1494]: time="2026-03-04T01:03:13.016923082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7311e5015ebddd7f15cf59f9f024ef5192a9cf7884b9ffc23c59fe31b738f667\"" Mar 4 01:03:13.024751 kubelet[2213]: E0304 01:03:13.024681 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:13.026314 containerd[1494]: time="2026-03-04T01:03:13.026073912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7cd35a68b65c2f608029bd2f2089b76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e50636680c7ec37541b1d03e55d72dc837c25a8496bc9530613d8f71fa2feb25\"" Mar 4 01:03:13.030898 kubelet[2213]: E0304 01:03:13.030708 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:13.038504 containerd[1494]: time="2026-03-04T01:03:13.038210267Z" level=info msg="CreateContainer within sandbox \"7311e5015ebddd7f15cf59f9f024ef5192a9cf7884b9ffc23c59fe31b738f667\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:03:13.043467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748431043.mount: Deactivated successfully. Mar 4 01:03:13.052941 containerd[1494]: time="2026-03-04T01:03:13.052889451Z" level=info msg="CreateContainer within sandbox \"e50636680c7ec37541b1d03e55d72dc837c25a8496bc9530613d8f71fa2feb25\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:03:13.079821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886462066.mount: Deactivated successfully. 
Mar 4 01:03:13.105809 containerd[1494]: time="2026-03-04T01:03:13.105275023Z" level=info msg="CreateContainer within sandbox \"bec4a2d403172ed9ff76170298befc9de1f6952dddc244e43272b02fd5cbea53\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c18210e1559b5774f06e9636c7d4fda1cfe0f72eeb693e5bcd4dc6df33ff4c2\"" Mar 4 01:03:13.109659 containerd[1494]: time="2026-03-04T01:03:13.107427425Z" level=info msg="StartContainer for \"5c18210e1559b5774f06e9636c7d4fda1cfe0f72eeb693e5bcd4dc6df33ff4c2\"" Mar 4 01:03:13.109659 containerd[1494]: time="2026-03-04T01:03:13.109453808Z" level=info msg="CreateContainer within sandbox \"7311e5015ebddd7f15cf59f9f024ef5192a9cf7884b9ffc23c59fe31b738f667\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a345e565f276cc1853da73a7f98947988eff1c94aefa4b390bca19a652c93de2\"" Mar 4 01:03:13.111849 containerd[1494]: time="2026-03-04T01:03:13.111820851Z" level=info msg="StartContainer for \"a345e565f276cc1853da73a7f98947988eff1c94aefa4b390bca19a652c93de2\"" Mar 4 01:03:13.410473 containerd[1494]: time="2026-03-04T01:03:13.409646600Z" level=info msg="CreateContainer within sandbox \"e50636680c7ec37541b1d03e55d72dc837c25a8496bc9530613d8f71fa2feb25\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a6d09e1d5910d63d2e0c848864faa9e7d213b9e09c294ec0f9d3b6ed943ec77\"" Mar 4 01:03:13.413426 containerd[1494]: time="2026-03-04T01:03:13.412169715Z" level=info msg="StartContainer for \"6a6d09e1d5910d63d2e0c848864faa9e7d213b9e09c294ec0f9d3b6ed943ec77\"" Mar 4 01:03:13.517836 systemd[1]: Started cri-containerd-a345e565f276cc1853da73a7f98947988eff1c94aefa4b390bca19a652c93de2.scope - libcontainer container a345e565f276cc1853da73a7f98947988eff1c94aefa4b390bca19a652c93de2. 
Mar 4 01:03:13.538525 systemd[1]: Started cri-containerd-5c18210e1559b5774f06e9636c7d4fda1cfe0f72eeb693e5bcd4dc6df33ff4c2.scope - libcontainer container 5c18210e1559b5774f06e9636c7d4fda1cfe0f72eeb693e5bcd4dc6df33ff4c2. Mar 4 01:03:13.762877 systemd[1]: Started cri-containerd-6a6d09e1d5910d63d2e0c848864faa9e7d213b9e09c294ec0f9d3b6ed943ec77.scope - libcontainer container 6a6d09e1d5910d63d2e0c848864faa9e7d213b9e09c294ec0f9d3b6ed943ec77. Mar 4 01:03:13.868183 containerd[1494]: time="2026-03-04T01:03:13.867904120Z" level=info msg="StartContainer for \"a345e565f276cc1853da73a7f98947988eff1c94aefa4b390bca19a652c93de2\" returns successfully" Mar 4 01:03:13.904826 containerd[1494]: time="2026-03-04T01:03:13.904773529Z" level=info msg="StartContainer for \"5c18210e1559b5774f06e9636c7d4fda1cfe0f72eeb693e5bcd4dc6df33ff4c2\" returns successfully" Mar 4 01:03:13.942661 containerd[1494]: time="2026-03-04T01:03:13.940958788Z" level=info msg="StartContainer for \"6a6d09e1d5910d63d2e0c848864faa9e7d213b9e09c294ec0f9d3b6ed943ec77\" returns successfully" Mar 4 01:03:14.505973 kubelet[2213]: E0304 01:03:14.505653 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:14.508010 kubelet[2213]: E0304 01:03:14.506139 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:14.514551 kubelet[2213]: E0304 01:03:14.512813 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:14.514551 kubelet[2213]: E0304 01:03:14.513063 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:14.525530 kubelet[2213]: E0304 
01:03:14.525118 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:14.525530 kubelet[2213]: E0304 01:03:14.525363 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:15.547851 kubelet[2213]: E0304 01:03:15.546188 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:15.547851 kubelet[2213]: E0304 01:03:15.546831 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:15.547851 kubelet[2213]: E0304 01:03:15.547045 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:15.547851 kubelet[2213]: E0304 01:03:15.546862 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:15.551013 kubelet[2213]: E0304 01:03:15.549470 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:15.551013 kubelet[2213]: E0304 01:03:15.549718 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:16.141401 kubelet[2213]: I0304 01:03:16.140122 2213 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:16.555133 kubelet[2213]: E0304 01:03:16.555032 2213 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:16.557270 kubelet[2213]: E0304 01:03:16.555792 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:16.557270 kubelet[2213]: E0304 01:03:16.556102 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:16.561965 kubelet[2213]: E0304 01:03:16.556200 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:18.715789 kubelet[2213]: E0304 01:03:18.715536 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:18.717093 kubelet[2213]: E0304 01:03:18.716919 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:19.910810 kubelet[2213]: E0304 01:03:19.904980 2213 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 4 01:03:20.511256 kubelet[2213]: E0304 01:03:20.510985 2213 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:03:20.512076 kubelet[2213]: E0304 01:03:20.512041 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:21.303969 kubelet[2213]: E0304 01:03:21.301428 2213 nodelease.go:50] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 4 01:03:21.372847 kubelet[2213]: I0304 01:03:21.372461 2213 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 4 01:03:21.454296 kubelet[2213]: I0304 01:03:21.451824 2213 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:21.475855 kubelet[2213]: E0304 01:03:21.475362 2213 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:21.475855 kubelet[2213]: I0304 01:03:21.475447 2213 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:21.478899 kubelet[2213]: E0304 01:03:21.478247 2213 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:21.478899 kubelet[2213]: I0304 01:03:21.478280 2213 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:21.488078 kubelet[2213]: E0304 01:03:21.488037 2213 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:22.338682 kubelet[2213]: I0304 01:03:22.323264 2213 apiserver.go:52] "Watching apiserver" Mar 4 01:03:22.677961 kubelet[2213]: I0304 01:03:22.650983 2213 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:03:24.075354 kubelet[2213]: I0304 01:03:24.074849 2213 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 
01:03:24.153857 kubelet[2213]: E0304 01:03:24.152905 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:24.417828 kubelet[2213]: E0304 01:03:24.415765 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:27.161365 systemd[1]: Reloading requested from client PID 2503 ('systemctl') (unit session-9.scope)... Mar 4 01:03:27.161425 systemd[1]: Reloading... Mar 4 01:03:27.549396 zram_generator::config[2541]: No configuration found. Mar 4 01:03:28.142494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:03:28.624286 systemd[1]: Reloading finished in 1462 ms. Mar 4 01:03:28.830920 kubelet[2213]: I0304 01:03:28.829379 2213 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:28.855922 kubelet[2213]: E0304 01:03:28.855531 2213 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:28.859970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:03:28.916916 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:03:28.917966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:03:28.918039 systemd[1]: kubelet.service: Consumed 7.155s CPU time, 127.7M memory peak, 0B memory swap peak. Mar 4 01:03:28.950355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:03:29.625282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:03:29.647858 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:03:30.011047 kubelet[2586]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:03:30.103730 kubelet[2586]: I0304 01:03:30.081291 2586 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 4 01:03:30.103730 kubelet[2586]: I0304 01:03:30.085499 2586 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:03:30.103730 kubelet[2586]: I0304 01:03:30.085540 2586 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:03:30.103730 kubelet[2586]: I0304 01:03:30.085547 2586 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:03:30.103730 kubelet[2586]: I0304 01:03:30.088844 2586 server.go:951] "Client rotation is on, will bootstrap in background" Mar 4 01:03:30.110404 kubelet[2586]: I0304 01:03:30.107787 2586 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 01:03:30.116504 kubelet[2586]: I0304 01:03:30.112539 2586 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:03:30.133679 kubelet[2586]: E0304 01:03:30.132127 2586 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:03:30.133679 kubelet[2586]: I0304 01:03:30.132250 2586 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 4 01:03:30.193835 kubelet[2586]: I0304 01:03:30.166693 2586 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 4 01:03:30.193835 kubelet[2586]: I0304 01:03:30.167686 2586 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:03:30.193835 kubelet[2586]: I0304 01:03:30.167776 2586 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 
01:03:30.193835 kubelet[2586]: I0304 01:03:30.170503 2586 topology_manager.go:143] "Creating topology manager with none policy" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.170520 2586 container_manager_linux.go:308] "Creating device plugin manager" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.170683 2586 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.171284 2586 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.171487 2586 kubelet.go:482] "Attempting to sync node with API server" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.171510 2586 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.171537 2586 kubelet.go:394] "Adding apiserver pod source" Mar 4 01:03:30.207489 kubelet[2586]: I0304 01:03:30.171683 2586 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:03:30.242905 kubelet[2586]: I0304 01:03:30.242823 2586 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:03:30.251646 kubelet[2586]: I0304 01:03:30.249528 2586 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:03:30.251646 kubelet[2586]: I0304 01:03:30.250078 2586 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:03:30.263019 kubelet[2586]: I0304 01:03:30.262874 2586 server.go:1257] "Started kubelet" Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.264774 2586 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.264887 2586 
server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.265328 2586 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.265385 2586 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.267712 2586 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.268049 2586 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:03:30.271101 kubelet[2586]: I0304 01:03:30.268377 2586 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:03:30.307453 kubelet[2586]: I0304 01:03:30.307356 2586 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 4 01:03:30.307706 kubelet[2586]: E0304 01:03:30.307526 2586 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:03:30.308238 kubelet[2586]: I0304 01:03:30.307942 2586 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:03:30.308405 kubelet[2586]: I0304 01:03:30.308318 2586 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:03:30.311139 kubelet[2586]: I0304 01:03:30.311056 2586 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:03:30.311361 kubelet[2586]: I0304 01:03:30.311280 2586 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:03:30.318754 kubelet[2586]: I0304 01:03:30.318539 2586 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:03:30.439885 kubelet[2586]: I0304 01:03:30.437353 2586 
kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 01:03:30.545085 kubelet[2586]: I0304 01:03:30.540496 2586 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 01:03:30.545085 kubelet[2586]: I0304 01:03:30.540545 2586 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 4 01:03:30.545085 kubelet[2586]: I0304 01:03:30.541752 2586 kubelet.go:2501] "Starting kubelet main sync loop" Mar 4 01:03:30.545085 kubelet[2586]: E0304 01:03:30.542240 2586 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:03:30.642779 kubelet[2586]: E0304 01:03:30.642730 2586 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.679958 2586 cpu_manager.go:225] "Starting" policy="none" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.679999 2586 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.680021 2586 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.680296 2586 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.680312 2586 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.680342 2586 policy_none.go:50] "Start" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.680354 2586 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:03:30.680630 kubelet[2586]: I0304 01:03:30.680372 2586 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 
4 01:03:30.681009 kubelet[2586]: I0304 01:03:30.680668 2586 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 4 01:03:30.681009 kubelet[2586]: I0304 01:03:30.680688 2586 policy_none.go:44] "Start" Mar 4 01:03:30.711330 kubelet[2586]: E0304 01:03:30.706879 2586 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:03:30.711330 kubelet[2586]: I0304 01:03:30.707305 2586 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 4 01:03:30.711330 kubelet[2586]: I0304 01:03:30.707326 2586 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:03:30.711330 kubelet[2586]: I0304 01:03:30.708470 2586 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 4 01:03:30.719003 kubelet[2586]: E0304 01:03:30.718961 2586 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:03:30.847840 kubelet[2586]: I0304 01:03:30.845887 2586 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:30.847840 kubelet[2586]: I0304 01:03:30.847026 2586 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:30.848463 kubelet[2586]: I0304 01:03:30.848091 2586 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:30.859779 kubelet[2586]: I0304 01:03:30.859735 2586 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:03:30.903482 kubelet[2586]: E0304 01:03:30.903333 2586 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:30.903775 kubelet[2586]: E0304 01:03:30.903545 2586 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:30.960230 kubelet[2586]: I0304 01:03:30.948428 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cd35a68b65c2f608029bd2f2089b76c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cd35a68b65c2f608029bd2f2089b76c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:30.960230 kubelet[2586]: I0304 01:03:30.948939 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cd35a68b65c2f608029bd2f2089b76c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cd35a68b65c2f608029bd2f2089b76c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:30.960230 kubelet[2586]: I0304 01:03:30.949156 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:30.960230 kubelet[2586]: I0304 01:03:30.949452 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:30.960230 kubelet[2586]: I0304 01:03:30.949485 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:30.964528 kubelet[2586]: I0304 01:03:30.949665 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:03:30.964528 kubelet[2586]: I0304 01:03:30.949762 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cd35a68b65c2f608029bd2f2089b76c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7cd35a68b65c2f608029bd2f2089b76c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:03:30.964528 kubelet[2586]: I0304 01:03:30.949791 2586 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:30.964528 kubelet[2586]: I0304 01:03:30.949812 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:03:30.990987 kubelet[2586]: I0304 01:03:30.984106 2586 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 4 01:03:30.990987 kubelet[2586]: I0304 01:03:30.985398 2586 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 4 01:03:31.211068 kubelet[2586]: E0304 01:03:31.210038 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:31.211068 kubelet[2586]: I0304 01:03:31.210343 2586 apiserver.go:52] "Watching apiserver" Mar 4 01:03:31.211068 kubelet[2586]: E0304 01:03:31.210532 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:31.212369 kubelet[2586]: E0304 01:03:31.212208 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:31.329129 kubelet[2586]: I0304 01:03:31.328119 2586 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:03:31.417881 kubelet[2586]: I0304 
01:03:31.417005 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.416933037 podStartE2EDuration="3.416933037s" podCreationTimestamp="2026-03-04 01:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:03:31.416466719 +0000 UTC m=+1.729559330" watchObservedRunningTime="2026-03-04 01:03:31.416933037 +0000 UTC m=+1.730025639" Mar 4 01:03:31.530888 kubelet[2586]: I0304 01:03:31.530550 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5305269940000001 podStartE2EDuration="1.530526994s" podCreationTimestamp="2026-03-04 01:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:03:31.530295091 +0000 UTC m=+1.843387702" watchObservedRunningTime="2026-03-04 01:03:31.530526994 +0000 UTC m=+1.843619595" Mar 4 01:03:31.530888 kubelet[2586]: I0304 01:03:31.530814 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.530802137 podStartE2EDuration="7.530802137s" podCreationTimestamp="2026-03-04 01:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:03:31.501104303 +0000 UTC m=+1.814196904" watchObservedRunningTime="2026-03-04 01:03:31.530802137 +0000 UTC m=+1.843894737" Mar 4 01:03:31.609925 kubelet[2586]: E0304 01:03:31.609721 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:31.609925 kubelet[2586]: E0304 01:03:31.609750 2586 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:31.610782 kubelet[2586]: E0304 01:03:31.609950 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:32.725250 kubelet[2586]: E0304 01:03:32.723434 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:32.790555 kubelet[2586]: E0304 01:03:32.725148 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:34.821378 kubelet[2586]: I0304 01:03:34.820854 2586 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:03:34.827119 kubelet[2586]: I0304 01:03:34.825959 2586 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:03:34.828877 containerd[1494]: time="2026-03-04T01:03:34.825339273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 01:03:35.589126 systemd[1]: Created slice kubepods-besteffort-pod30f184c5_12cc_4a2a_9c2e_ea20e2f33c82.slice - libcontainer container kubepods-besteffort-pod30f184c5_12cc_4a2a_9c2e_ea20e2f33c82.slice. 
Mar 4 01:03:35.712673 kubelet[2586]: I0304 01:03:35.710797 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30f184c5-12cc-4a2a-9c2e-ea20e2f33c82-kube-proxy\") pod \"kube-proxy-9nxct\" (UID: \"30f184c5-12cc-4a2a-9c2e-ea20e2f33c82\") " pod="kube-system/kube-proxy-9nxct" Mar 4 01:03:35.712673 kubelet[2586]: I0304 01:03:35.710875 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30f184c5-12cc-4a2a-9c2e-ea20e2f33c82-xtables-lock\") pod \"kube-proxy-9nxct\" (UID: \"30f184c5-12cc-4a2a-9c2e-ea20e2f33c82\") " pod="kube-system/kube-proxy-9nxct" Mar 4 01:03:35.712673 kubelet[2586]: I0304 01:03:35.710914 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30f184c5-12cc-4a2a-9c2e-ea20e2f33c82-lib-modules\") pod \"kube-proxy-9nxct\" (UID: \"30f184c5-12cc-4a2a-9c2e-ea20e2f33c82\") " pod="kube-system/kube-proxy-9nxct" Mar 4 01:03:35.712673 kubelet[2586]: I0304 01:03:35.711002 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqdg\" (UniqueName: \"kubernetes.io/projected/30f184c5-12cc-4a2a-9c2e-ea20e2f33c82-kube-api-access-pxqdg\") pod \"kube-proxy-9nxct\" (UID: \"30f184c5-12cc-4a2a-9c2e-ea20e2f33c82\") " pod="kube-system/kube-proxy-9nxct" Mar 4 01:03:35.940037 kubelet[2586]: E0304 01:03:35.934190 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:35.945157 containerd[1494]: time="2026-03-04T01:03:35.944360854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nxct,Uid:30f184c5-12cc-4a2a-9c2e-ea20e2f33c82,Namespace:kube-system,Attempt:0,}" Mar 4 
01:03:36.972172 containerd[1494]: time="2026-03-04T01:03:36.959298612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:36.972172 containerd[1494]: time="2026-03-04T01:03:36.960097420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:36.972172 containerd[1494]: time="2026-03-04T01:03:36.960122587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:37.113448 containerd[1494]: time="2026-03-04T01:03:37.054407771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:37.268952 kubelet[2586]: I0304 01:03:37.230830 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e843f19-5ca5-4ff7-9e0a-988e1372651a-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-w9v2z\" (UID: \"7e843f19-5ca5-4ff7-9e0a-988e1372651a\") " pod="tigera-operator/tigera-operator-6cf4cccc57-w9v2z" Mar 4 01:03:37.467971 kubelet[2586]: I0304 01:03:37.312534 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbdd\" (UniqueName: \"kubernetes.io/projected/7e843f19-5ca5-4ff7-9e0a-988e1372651a-kube-api-access-hwbdd\") pod \"tigera-operator-6cf4cccc57-w9v2z\" (UID: \"7e843f19-5ca5-4ff7-9e0a-988e1372651a\") " pod="tigera-operator/tigera-operator-6cf4cccc57-w9v2z" Mar 4 01:03:37.745049 systemd[1]: Created slice kubepods-besteffort-pod7e843f19_5ca5_4ff7_9e0a_988e1372651a.slice - libcontainer container kubepods-besteffort-pod7e843f19_5ca5_4ff7_9e0a_988e1372651a.slice. 
Mar 4 01:03:37.885884 systemd[1]: Started cri-containerd-1064aa09b63615c48043e5fcda324ac369fde1e27241a5260f21c47b340fb2b9.scope - libcontainer container 1064aa09b63615c48043e5fcda324ac369fde1e27241a5260f21c47b340fb2b9. Mar 4 01:03:38.019512 containerd[1494]: time="2026-03-04T01:03:38.018183231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nxct,Uid:30f184c5-12cc-4a2a-9c2e-ea20e2f33c82,Namespace:kube-system,Attempt:0,} returns sandbox id \"1064aa09b63615c48043e5fcda324ac369fde1e27241a5260f21c47b340fb2b9\"" Mar 4 01:03:38.022653 kubelet[2586]: E0304 01:03:38.022403 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:38.039229 containerd[1494]: time="2026-03-04T01:03:38.039075464Z" level=info msg="CreateContainer within sandbox \"1064aa09b63615c48043e5fcda324ac369fde1e27241a5260f21c47b340fb2b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:03:38.063241 containerd[1494]: time="2026-03-04T01:03:38.063087645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-w9v2z,Uid:7e843f19-5ca5-4ff7-9e0a-988e1372651a,Namespace:tigera-operator,Attempt:0,}" Mar 4 01:03:38.110348 containerd[1494]: time="2026-03-04T01:03:38.108985929Z" level=info msg="CreateContainer within sandbox \"1064aa09b63615c48043e5fcda324ac369fde1e27241a5260f21c47b340fb2b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"066caacf867eaecfc5420d5fc403e360689ed10f1597f21afeddf891a93dad08\"" Mar 4 01:03:38.115429 containerd[1494]: time="2026-03-04T01:03:38.115209422Z" level=info msg="StartContainer for \"066caacf867eaecfc5420d5fc403e360689ed10f1597f21afeddf891a93dad08\"" Mar 4 01:03:38.204963 systemd[1]: Started cri-containerd-066caacf867eaecfc5420d5fc403e360689ed10f1597f21afeddf891a93dad08.scope - libcontainer container 
066caacf867eaecfc5420d5fc403e360689ed10f1597f21afeddf891a93dad08. Mar 4 01:03:38.467425 containerd[1494]: time="2026-03-04T01:03:38.467126204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:38.467425 containerd[1494]: time="2026-03-04T01:03:38.467252281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:38.468143 containerd[1494]: time="2026-03-04T01:03:38.467382668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:38.468143 containerd[1494]: time="2026-03-04T01:03:38.467525306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:38.578733 systemd[1]: run-containerd-runc-k8s.io-8d3f79fd33ed07bb15ab7b057aeb77ac467d98d80d8b922da928315795730b54-runc.hBvjUU.mount: Deactivated successfully. Mar 4 01:03:38.619090 systemd[1]: Started cri-containerd-8d3f79fd33ed07bb15ab7b057aeb77ac467d98d80d8b922da928315795730b54.scope - libcontainer container 8d3f79fd33ed07bb15ab7b057aeb77ac467d98d80d8b922da928315795730b54. 
Mar 4 01:03:38.866233 containerd[1494]: time="2026-03-04T01:03:38.865877826Z" level=info msg="StartContainer for \"066caacf867eaecfc5420d5fc403e360689ed10f1597f21afeddf891a93dad08\" returns successfully" Mar 4 01:03:39.254139 kubelet[2586]: E0304 01:03:39.251096 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:39.354477 kubelet[2586]: I0304 01:03:39.342477 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9nxct" podStartSLOduration=4.342456274 podStartE2EDuration="4.342456274s" podCreationTimestamp="2026-03-04 01:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:03:39.342111484 +0000 UTC m=+9.655204095" watchObservedRunningTime="2026-03-04 01:03:39.342456274 +0000 UTC m=+9.655548875" Mar 4 01:03:39.426196 containerd[1494]: time="2026-03-04T01:03:39.425988820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-w9v2z,Uid:7e843f19-5ca5-4ff7-9e0a-988e1372651a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d3f79fd33ed07bb15ab7b057aeb77ac467d98d80d8b922da928315795730b54\"" Mar 4 01:03:39.431909 containerd[1494]: time="2026-03-04T01:03:39.431231385Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 01:03:41.131702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77622752.mount: Deactivated successfully. 
Mar 4 01:03:45.040951 containerd[1494]: time="2026-03-04T01:03:45.039108446Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:45.040951 containerd[1494]: time="2026-03-04T01:03:45.040300898Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 4 01:03:45.042638 containerd[1494]: time="2026-03-04T01:03:45.042499780Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:45.047535 containerd[1494]: time="2026-03-04T01:03:45.047381024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:45.049219 containerd[1494]: time="2026-03-04T01:03:45.049018088Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.6177482s" Mar 4 01:03:45.049219 containerd[1494]: time="2026-03-04T01:03:45.049103288Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 4 01:03:45.059356 containerd[1494]: time="2026-03-04T01:03:45.059221444Z" level=info msg="CreateContainer within sandbox \"8d3f79fd33ed07bb15ab7b057aeb77ac467d98d80d8b922da928315795730b54\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 01:03:45.109844 containerd[1494]: time="2026-03-04T01:03:45.108932089Z" level=info msg="CreateContainer within sandbox 
\"8d3f79fd33ed07bb15ab7b057aeb77ac467d98d80d8b922da928315795730b54\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"61fb9ca90f07fe7330a86d90c85bddd3d4ded656a1299f2754a5671bec47c6a0\"" Mar 4 01:03:45.115014 containerd[1494]: time="2026-03-04T01:03:45.114858398Z" level=info msg="StartContainer for \"61fb9ca90f07fe7330a86d90c85bddd3d4ded656a1299f2754a5671bec47c6a0\"" Mar 4 01:03:45.272910 systemd[1]: Started cri-containerd-61fb9ca90f07fe7330a86d90c85bddd3d4ded656a1299f2754a5671bec47c6a0.scope - libcontainer container 61fb9ca90f07fe7330a86d90c85bddd3d4ded656a1299f2754a5671bec47c6a0. Mar 4 01:03:45.368757 containerd[1494]: time="2026-03-04T01:03:45.368440301Z" level=info msg="StartContainer for \"61fb9ca90f07fe7330a86d90c85bddd3d4ded656a1299f2754a5671bec47c6a0\" returns successfully" Mar 4 01:03:46.461278 kubelet[2586]: I0304 01:03:46.456783 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-w9v2z" podStartSLOduration=4.83644619 podStartE2EDuration="10.456765863s" podCreationTimestamp="2026-03-04 01:03:36 +0000 UTC" firstStartedPulling="2026-03-04 01:03:39.430782949 +0000 UTC m=+9.743875560" lastFinishedPulling="2026-03-04 01:03:45.051102622 +0000 UTC m=+15.364195233" observedRunningTime="2026-03-04 01:03:46.455809083 +0000 UTC m=+16.768901684" watchObservedRunningTime="2026-03-04 01:03:46.456765863 +0000 UTC m=+16.769858464" Mar 4 01:03:55.462410 sudo[1669]: pam_unix(sudo:session): session closed for user root Mar 4 01:03:55.470034 sshd[1665]: pam_unix(sshd:session): session closed for user core Mar 4 01:03:55.476941 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:03:55.480961 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:56000.service: Deactivated successfully. Mar 4 01:03:55.500068 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 4 01:03:55.500494 systemd[1]: session-9.scope: Consumed 19.927s CPU time, 161.3M memory peak, 0B memory swap peak. Mar 4 01:03:55.506227 systemd-logind[1464]: Removed session 9. Mar 4 01:03:57.913493 systemd[1]: Created slice kubepods-besteffort-podf5d3cc09_ee37_4ea8_b59a_7e04001d7d99.slice - libcontainer container kubepods-besteffort-podf5d3cc09_ee37_4ea8_b59a_7e04001d7d99.slice. Mar 4 01:03:57.959381 kubelet[2586]: I0304 01:03:57.959037 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d3cc09-ee37-4ea8-b59a-7e04001d7d99-tigera-ca-bundle\") pod \"calico-typha-55c9cf578c-tdwqm\" (UID: \"f5d3cc09-ee37-4ea8-b59a-7e04001d7d99\") " pod="calico-system/calico-typha-55c9cf578c-tdwqm" Mar 4 01:03:57.959381 kubelet[2586]: I0304 01:03:57.959162 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc57j\" (UniqueName: \"kubernetes.io/projected/f5d3cc09-ee37-4ea8-b59a-7e04001d7d99-kube-api-access-dc57j\") pod \"calico-typha-55c9cf578c-tdwqm\" (UID: \"f5d3cc09-ee37-4ea8-b59a-7e04001d7d99\") " pod="calico-system/calico-typha-55c9cf578c-tdwqm" Mar 4 01:03:57.959381 kubelet[2586]: I0304 01:03:57.959209 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5d3cc09-ee37-4ea8-b59a-7e04001d7d99-typha-certs\") pod \"calico-typha-55c9cf578c-tdwqm\" (UID: \"f5d3cc09-ee37-4ea8-b59a-7e04001d7d99\") " pod="calico-system/calico-typha-55c9cf578c-tdwqm" Mar 4 01:03:58.176431 systemd[1]: Created slice kubepods-besteffort-pod1daecdea_fe7d_47cf_ac51_b99c9edd5d3e.slice - libcontainer container kubepods-besteffort-pod1daecdea_fe7d_47cf_ac51_b99c9edd5d3e.slice. 
Mar 4 01:03:58.228452 kubelet[2586]: E0304 01:03:58.228096 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:58.243887 containerd[1494]: time="2026-03-04T01:03:58.243351747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c9cf578c-tdwqm,Uid:f5d3cc09-ee37-4ea8-b59a-7e04001d7d99,Namespace:calico-system,Attempt:0,}" Mar 4 01:03:58.267849 kubelet[2586]: I0304 01:03:58.266387 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-node-certs\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.267849 kubelet[2586]: I0304 01:03:58.266456 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-var-lib-calico\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.267849 kubelet[2586]: I0304 01:03:58.266486 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gm5\" (UniqueName: \"kubernetes.io/projected/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-kube-api-access-j2gm5\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.267849 kubelet[2586]: I0304 01:03:58.266526 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-var-run-calico\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " 
pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.267849 kubelet[2586]: I0304 01:03:58.266554 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-cni-net-dir\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268269 kubelet[2586]: I0304 01:03:58.266804 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-flexvol-driver-host\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268269 kubelet[2586]: I0304 01:03:58.266969 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-bpffs\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268269 kubelet[2586]: I0304 01:03:58.267010 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-cni-log-dir\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268269 kubelet[2586]: I0304 01:03:58.267038 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-lib-modules\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268269 kubelet[2586]: I0304 
01:03:58.267062 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-policysync\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268793 kubelet[2586]: I0304 01:03:58.267235 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-sys-fs\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268793 kubelet[2586]: I0304 01:03:58.267316 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-cni-bin-dir\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268793 kubelet[2586]: I0304 01:03:58.267390 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-xtables-lock\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268793 kubelet[2586]: I0304 01:03:58.267467 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-nodeproc\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.268793 kubelet[2586]: I0304 01:03:58.267505 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1daecdea-fe7d-47cf-ac51-b99c9edd5d3e-tigera-ca-bundle\") pod \"calico-node-k48bj\" (UID: \"1daecdea-fe7d-47cf-ac51-b99c9edd5d3e\") " pod="calico-system/calico-node-k48bj" Mar 4 01:03:58.415030 containerd[1494]: time="2026-03-04T01:03:58.412217958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:58.415030 containerd[1494]: time="2026-03-04T01:03:58.412323057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:58.415030 containerd[1494]: time="2026-03-04T01:03:58.412341541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:58.415030 containerd[1494]: time="2026-03-04T01:03:58.412484900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:58.415296 kubelet[2586]: E0304 01:03:58.415231 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.415351 kubelet[2586]: W0304 01:03:58.415294 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.417718 kubelet[2586]: E0304 01:03:58.415860 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.419525 kubelet[2586]: E0304 01:03:58.419449 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.419525 kubelet[2586]: W0304 01:03:58.419516 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.419803 kubelet[2586]: E0304 01:03:58.419545 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.423452 kubelet[2586]: E0304 01:03:58.423338 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.423452 kubelet[2586]: W0304 01:03:58.423414 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.423452 kubelet[2586]: E0304 01:03:58.423443 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.428032 kubelet[2586]: E0304 01:03:58.426945 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.428032 kubelet[2586]: W0304 01:03:58.427015 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.428032 kubelet[2586]: E0304 01:03:58.427041 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.430348 kubelet[2586]: E0304 01:03:58.430243 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.430348 kubelet[2586]: W0304 01:03:58.430305 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.430348 kubelet[2586]: E0304 01:03:58.430332 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.432190 kubelet[2586]: E0304 01:03:58.432150 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682" Mar 4 01:03:58.451007 kubelet[2586]: E0304 01:03:58.450736 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.451007 kubelet[2586]: W0304 01:03:58.450810 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.451007 kubelet[2586]: E0304 01:03:58.450841 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.465986 kubelet[2586]: E0304 01:03:58.465955 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.466144 kubelet[2586]: W0304 01:03:58.466122 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.466265 kubelet[2586]: E0304 01:03:58.466244 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.515105 containerd[1494]: time="2026-03-04T01:03:58.515050835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k48bj,Uid:1daecdea-fe7d-47cf-ac51-b99c9edd5d3e,Namespace:calico-system,Attempt:0,}" Mar 4 01:03:58.520034 kubelet[2586]: E0304 01:03:58.519796 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.520034 kubelet[2586]: W0304 01:03:58.519824 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.520034 kubelet[2586]: E0304 01:03:58.519853 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.523452 kubelet[2586]: E0304 01:03:58.523287 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.523452 kubelet[2586]: W0304 01:03:58.523305 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.523452 kubelet[2586]: E0304 01:03:58.523326 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.531074 kubelet[2586]: E0304 01:03:58.530202 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.531074 kubelet[2586]: W0304 01:03:58.530227 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.531074 kubelet[2586]: E0304 01:03:58.530257 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.533451 systemd[1]: Started cri-containerd-6c3594ca6cfe7a8c0fb577862f4b247802903887c436d2c0cbf528a86ae15ff8.scope - libcontainer container 6c3594ca6cfe7a8c0fb577862f4b247802903887c436d2c0cbf528a86ae15ff8. Mar 4 01:03:58.546817 kubelet[2586]: E0304 01:03:58.544272 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.546817 kubelet[2586]: W0304 01:03:58.544298 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.546817 kubelet[2586]: E0304 01:03:58.544328 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.551426 kubelet[2586]: E0304 01:03:58.551154 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.551426 kubelet[2586]: W0304 01:03:58.551217 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.551426 kubelet[2586]: E0304 01:03:58.551241 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.553461 kubelet[2586]: E0304 01:03:58.553261 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.553461 kubelet[2586]: W0304 01:03:58.553312 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.553461 kubelet[2586]: E0304 01:03:58.553336 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.557425 kubelet[2586]: E0304 01:03:58.557244 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.557425 kubelet[2586]: W0304 01:03:58.557302 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.557425 kubelet[2586]: E0304 01:03:58.557327 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.560998 kubelet[2586]: E0304 01:03:58.558486 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.560998 kubelet[2586]: W0304 01:03:58.558500 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.560998 kubelet[2586]: E0304 01:03:58.558523 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.568080 kubelet[2586]: E0304 01:03:58.566147 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.568080 kubelet[2586]: W0304 01:03:58.567304 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.570164 kubelet[2586]: E0304 01:03:58.569110 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.576953 kubelet[2586]: E0304 01:03:58.576759 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.576953 kubelet[2586]: W0304 01:03:58.576785 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.576953 kubelet[2586]: E0304 01:03:58.576812 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.577348 kubelet[2586]: E0304 01:03:58.577297 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.577348 kubelet[2586]: W0304 01:03:58.577312 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.577348 kubelet[2586]: E0304 01:03:58.577332 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.579332 kubelet[2586]: E0304 01:03:58.579228 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.579332 kubelet[2586]: W0304 01:03:58.579245 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.579332 kubelet[2586]: E0304 01:03:58.579267 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.580967 kubelet[2586]: E0304 01:03:58.580876 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.580967 kubelet[2586]: W0304 01:03:58.580890 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.580967 kubelet[2586]: E0304 01:03:58.580909 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.603866 kubelet[2586]: E0304 01:03:58.587161 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.603866 kubelet[2586]: W0304 01:03:58.587350 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.603866 kubelet[2586]: E0304 01:03:58.587371 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.603866 kubelet[2586]: E0304 01:03:58.592168 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.603866 kubelet[2586]: W0304 01:03:58.592183 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.603866 kubelet[2586]: E0304 01:03:58.592201 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.613985 kubelet[2586]: E0304 01:03:58.613121 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.613985 kubelet[2586]: W0304 01:03:58.613146 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.613985 kubelet[2586]: E0304 01:03:58.613170 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.629089 kubelet[2586]: E0304 01:03:58.628976 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.629089 kubelet[2586]: W0304 01:03:58.629068 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.629283 kubelet[2586]: E0304 01:03:58.629106 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.632065 kubelet[2586]: E0304 01:03:58.630760 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.632065 kubelet[2586]: W0304 01:03:58.630777 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.632065 kubelet[2586]: E0304 01:03:58.630800 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.632390 kubelet[2586]: E0304 01:03:58.632237 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.632390 kubelet[2586]: W0304 01:03:58.632300 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.632390 kubelet[2586]: E0304 01:03:58.632325 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.636516 kubelet[2586]: E0304 01:03:58.635102 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.636516 kubelet[2586]: W0304 01:03:58.635122 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.636516 kubelet[2586]: E0304 01:03:58.635146 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.636516 kubelet[2586]: E0304 01:03:58.635879 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.636516 kubelet[2586]: W0304 01:03:58.635894 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.636516 kubelet[2586]: E0304 01:03:58.635914 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.639075 kubelet[2586]: E0304 01:03:58.638401 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.639075 kubelet[2586]: W0304 01:03:58.638420 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.639075 kubelet[2586]: E0304 01:03:58.638441 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.639790 kubelet[2586]: E0304 01:03:58.639553 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.639790 kubelet[2586]: W0304 01:03:58.639767 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.639790 kubelet[2586]: E0304 01:03:58.639788 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.641084 kubelet[2586]: I0304 01:03:58.640915 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dm6\" (UniqueName: \"kubernetes.io/projected/70e9d987-0384-4b7c-aa94-bbc127680682-kube-api-access-w9dm6\") pod \"csi-node-driver-b8wzc\" (UID: \"70e9d987-0384-4b7c-aa94-bbc127680682\") " pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:03:58.645181 kubelet[2586]: E0304 01:03:58.643320 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.645181 kubelet[2586]: W0304 01:03:58.643342 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.645181 kubelet[2586]: E0304 01:03:58.643361 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.646100 kubelet[2586]: E0304 01:03:58.645895 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.646161 kubelet[2586]: W0304 01:03:58.646100 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.646970 kubelet[2586]: E0304 01:03:58.646363 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.648369 kubelet[2586]: E0304 01:03:58.648292 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.648369 kubelet[2586]: W0304 01:03:58.648344 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.648369 kubelet[2586]: E0304 01:03:58.648360 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.655449 kubelet[2586]: I0304 01:03:58.655162 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70e9d987-0384-4b7c-aa94-bbc127680682-kubelet-dir\") pod \"csi-node-driver-b8wzc\" (UID: \"70e9d987-0384-4b7c-aa94-bbc127680682\") " pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:03:58.662506 kubelet[2586]: E0304 01:03:58.662323 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.662506 kubelet[2586]: W0304 01:03:58.662401 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.664039 kubelet[2586]: E0304 01:03:58.663750 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.664341 kubelet[2586]: I0304 01:03:58.664103 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/70e9d987-0384-4b7c-aa94-bbc127680682-registration-dir\") pod \"csi-node-driver-b8wzc\" (UID: \"70e9d987-0384-4b7c-aa94-bbc127680682\") " pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:03:58.667397 kubelet[2586]: E0304 01:03:58.667279 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.667397 kubelet[2586]: W0304 01:03:58.667354 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.667397 kubelet[2586]: E0304 01:03:58.667381 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.670421 kubelet[2586]: E0304 01:03:58.670196 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.670421 kubelet[2586]: W0304 01:03:58.670215 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.670421 kubelet[2586]: E0304 01:03:58.670235 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.674874 kubelet[2586]: E0304 01:03:58.673112 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.674874 kubelet[2586]: W0304 01:03:58.673130 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.674874 kubelet[2586]: E0304 01:03:58.673148 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.676253 kubelet[2586]: I0304 01:03:58.675763 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/70e9d987-0384-4b7c-aa94-bbc127680682-varrun\") pod \"csi-node-driver-b8wzc\" (UID: \"70e9d987-0384-4b7c-aa94-bbc127680682\") " pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:03:58.680878 kubelet[2586]: E0304 01:03:58.675898 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.680878 kubelet[2586]: W0304 01:03:58.680834 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.680878 kubelet[2586]: E0304 01:03:58.680862 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.707331 kubelet[2586]: E0304 01:03:58.704224 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.707331 kubelet[2586]: W0304 01:03:58.704257 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.707331 kubelet[2586]: E0304 01:03:58.704288 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.707331 kubelet[2586]: E0304 01:03:58.705973 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.707331 kubelet[2586]: W0304 01:03:58.705990 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.707331 kubelet[2586]: E0304 01:03:58.706011 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.707331 kubelet[2586]: I0304 01:03:58.706057 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/70e9d987-0384-4b7c-aa94-bbc127680682-socket-dir\") pod \"csi-node-driver-b8wzc\" (UID: \"70e9d987-0384-4b7c-aa94-bbc127680682\") " pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:03:58.708927 kubelet[2586]: E0304 01:03:58.708845 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.708927 kubelet[2586]: W0304 01:03:58.708908 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.708927 kubelet[2586]: E0304 01:03:58.708929 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.712374 containerd[1494]: time="2026-03-04T01:03:58.712011800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:58.712374 containerd[1494]: time="2026-03-04T01:03:58.712116276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:58.712374 containerd[1494]: time="2026-03-04T01:03:58.712135903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:58.712374 containerd[1494]: time="2026-03-04T01:03:58.712259115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:58.713796 kubelet[2586]: E0304 01:03:58.713477 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.713796 kubelet[2586]: W0304 01:03:58.713548 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.713796 kubelet[2586]: E0304 01:03:58.713737 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.790225 systemd[1]: Started cri-containerd-3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6.scope - libcontainer container 3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6. Mar 4 01:03:58.810323 kubelet[2586]: E0304 01:03:58.810289 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.810496 kubelet[2586]: W0304 01:03:58.810469 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.811001 kubelet[2586]: E0304 01:03:58.810795 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.811501 kubelet[2586]: E0304 01:03:58.811483 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.813118 kubelet[2586]: W0304 01:03:58.813097 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.813202 kubelet[2586]: E0304 01:03:58.813186 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.814001 containerd[1494]: time="2026-03-04T01:03:58.813881087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c9cf578c-tdwqm,Uid:f5d3cc09-ee37-4ea8-b59a-7e04001d7d99,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c3594ca6cfe7a8c0fb577862f4b247802903887c436d2c0cbf528a86ae15ff8\"" Mar 4 01:03:58.814772 kubelet[2586]: E0304 01:03:58.814443 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.814772 kubelet[2586]: W0304 01:03:58.814730 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.814904 kubelet[2586]: E0304 01:03:58.814836 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.814956 kubelet[2586]: E0304 01:03:58.814932 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:58.817741 kubelet[2586]: E0304 01:03:58.815780 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.817741 kubelet[2586]: W0304 01:03:58.815801 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.817741 kubelet[2586]: E0304 01:03:58.815951 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.817741 kubelet[2586]: E0304 01:03:58.817218 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.817741 kubelet[2586]: W0304 01:03:58.817231 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.817741 kubelet[2586]: E0304 01:03:58.817245 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.817979 containerd[1494]: time="2026-03-04T01:03:58.817140931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 01:03:58.818882 kubelet[2586]: E0304 01:03:58.818810 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.818882 kubelet[2586]: W0304 01:03:58.818880 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.818981 kubelet[2586]: E0304 01:03:58.818898 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.819435 kubelet[2586]: E0304 01:03:58.819366 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.819435 kubelet[2586]: W0304 01:03:58.819430 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.819533 kubelet[2586]: E0304 01:03:58.819447 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.820918 kubelet[2586]: E0304 01:03:58.820778 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.820918 kubelet[2586]: W0304 01:03:58.820839 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.820918 kubelet[2586]: E0304 01:03:58.820855 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.821804 kubelet[2586]: E0304 01:03:58.821446 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.821804 kubelet[2586]: W0304 01:03:58.821459 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.821804 kubelet[2586]: E0304 01:03:58.821469 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.821997 kubelet[2586]: E0304 01:03:58.821941 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.822047 kubelet[2586]: W0304 01:03:58.821999 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.822047 kubelet[2586]: E0304 01:03:58.822011 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.823801 kubelet[2586]: E0304 01:03:58.823335 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.823801 kubelet[2586]: W0304 01:03:58.823352 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.823801 kubelet[2586]: E0304 01:03:58.823368 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.824402 kubelet[2586]: E0304 01:03:58.824288 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.824402 kubelet[2586]: W0304 01:03:58.824360 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.824402 kubelet[2586]: E0304 01:03:58.824380 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.826983 kubelet[2586]: E0304 01:03:58.825936 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.826983 kubelet[2586]: W0304 01:03:58.825954 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.826983 kubelet[2586]: E0304 01:03:58.825971 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.827295 kubelet[2586]: E0304 01:03:58.827204 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.827295 kubelet[2586]: W0304 01:03:58.827262 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.827295 kubelet[2586]: E0304 01:03:58.827279 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.828313 kubelet[2586]: E0304 01:03:58.828227 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.828313 kubelet[2586]: W0304 01:03:58.828291 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.828313 kubelet[2586]: E0304 01:03:58.828307 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.830782 kubelet[2586]: E0304 01:03:58.830272 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.830782 kubelet[2586]: W0304 01:03:58.830293 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.830782 kubelet[2586]: E0304 01:03:58.830306 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.831911 kubelet[2586]: E0304 01:03:58.831550 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.831911 kubelet[2586]: W0304 01:03:58.831808 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.831911 kubelet[2586]: E0304 01:03:58.831825 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.834151 kubelet[2586]: E0304 01:03:58.834070 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.834151 kubelet[2586]: W0304 01:03:58.834135 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.834151 kubelet[2586]: E0304 01:03:58.834151 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.836822 kubelet[2586]: E0304 01:03:58.834902 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.836822 kubelet[2586]: W0304 01:03:58.834917 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.836822 kubelet[2586]: E0304 01:03:58.834930 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.836822 kubelet[2586]: E0304 01:03:58.835522 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.836822 kubelet[2586]: W0304 01:03:58.835536 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.836822 kubelet[2586]: E0304 01:03:58.835548 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.836822 kubelet[2586]: E0304 01:03:58.836113 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.836822 kubelet[2586]: W0304 01:03:58.836124 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.836822 kubelet[2586]: E0304 01:03:58.836136 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.837498 kubelet[2586]: E0304 01:03:58.837194 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.837498 kubelet[2586]: W0304 01:03:58.837207 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.837498 kubelet[2586]: E0304 01:03:58.837221 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.839308 kubelet[2586]: E0304 01:03:58.839200 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.839308 kubelet[2586]: W0304 01:03:58.839214 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.839308 kubelet[2586]: E0304 01:03:58.839225 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.840399 kubelet[2586]: E0304 01:03:58.840318 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.840399 kubelet[2586]: W0304 01:03:58.840375 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.840399 kubelet[2586]: E0304 01:03:58.840391 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.841518 kubelet[2586]: E0304 01:03:58.841384 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.841518 kubelet[2586]: W0304 01:03:58.841437 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.841518 kubelet[2586]: E0304 01:03:58.841452 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:03:58.866903 kubelet[2586]: E0304 01:03:58.865840 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:03:58.866903 kubelet[2586]: W0304 01:03:58.865867 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:03:58.866903 kubelet[2586]: E0304 01:03:58.865891 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:03:58.939792 containerd[1494]: time="2026-03-04T01:03:58.933226680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k48bj,Uid:1daecdea-fe7d-47cf-ac51-b99c9edd5d3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\"" Mar 4 01:03:59.548126 kubelet[2586]: E0304 01:03:59.544237 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682" Mar 4 01:04:00.635152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116993099.mount: Deactivated successfully. 
Mar 4 01:04:01.553085 kubelet[2586]: E0304 01:04:01.551048 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682" Mar 4 01:04:02.518047 containerd[1494]: time="2026-03-04T01:04:02.516945692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.518739 containerd[1494]: time="2026-03-04T01:04:02.518530049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 4 01:04:02.520863 containerd[1494]: time="2026-03-04T01:04:02.520785623Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.524305 containerd[1494]: time="2026-03-04T01:04:02.524213623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.525619 containerd[1494]: time="2026-03-04T01:04:02.525534978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.708363391s" Mar 4 01:04:02.525734 containerd[1494]: time="2026-03-04T01:04:02.525694327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 4 01:04:02.528409 containerd[1494]: time="2026-03-04T01:04:02.528314322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 01:04:02.565280 containerd[1494]: time="2026-03-04T01:04:02.565217796Z" level=info msg="CreateContainer within sandbox \"6c3594ca6cfe7a8c0fb577862f4b247802903887c436d2c0cbf528a86ae15ff8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 01:04:02.602962 containerd[1494]: time="2026-03-04T01:04:02.602489660Z" level=info msg="CreateContainer within sandbox \"6c3594ca6cfe7a8c0fb577862f4b247802903887c436d2c0cbf528a86ae15ff8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f462fbbe7d3de1c1026b3777ee9207e4a2ec1b568f32a5863468bc6a5cd9fb02\"" Mar 4 01:04:02.604040 containerd[1494]: time="2026-03-04T01:04:02.603947132Z" level=info msg="StartContainer for \"f462fbbe7d3de1c1026b3777ee9207e4a2ec1b568f32a5863468bc6a5cd9fb02\"" Mar 4 01:04:02.707378 systemd[1]: Started cri-containerd-f462fbbe7d3de1c1026b3777ee9207e4a2ec1b568f32a5863468bc6a5cd9fb02.scope - libcontainer container f462fbbe7d3de1c1026b3777ee9207e4a2ec1b568f32a5863468bc6a5cd9fb02. 
Mar 4 01:04:02.814510 containerd[1494]: time="2026-03-04T01:04:02.814173732Z" level=info msg="StartContainer for \"f462fbbe7d3de1c1026b3777ee9207e4a2ec1b568f32a5863468bc6a5cd9fb02\" returns successfully" Mar 4 01:04:03.270863 kubelet[2586]: E0304 01:04:03.269072 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:03.311793 kubelet[2586]: I0304 01:04:03.311553 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-55c9cf578c-tdwqm" podStartSLOduration=2.599859519 podStartE2EDuration="6.311537893s" podCreationTimestamp="2026-03-04 01:03:57 +0000 UTC" firstStartedPulling="2026-03-04 01:03:58.8161025 +0000 UTC m=+29.129195100" lastFinishedPulling="2026-03-04 01:04:02.527780853 +0000 UTC m=+32.840873474" observedRunningTime="2026-03-04 01:04:03.30903026 +0000 UTC m=+33.622122872" watchObservedRunningTime="2026-03-04 01:04:03.311537893 +0000 UTC m=+33.624630514" Mar 4 01:04:03.320039 kubelet[2586]: E0304 01:04:03.319553 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.320039 kubelet[2586]: W0304 01:04:03.319712 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.320039 kubelet[2586]: E0304 01:04:03.319743 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.321824 kubelet[2586]: E0304 01:04:03.321099 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.321824 kubelet[2586]: W0304 01:04:03.321228 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.321824 kubelet[2586]: E0304 01:04:03.321241 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.321824 kubelet[2586]: E0304 01:04:03.321766 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.321824 kubelet[2586]: W0304 01:04:03.321777 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.322110 kubelet[2586]: E0304 01:04:03.321858 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.323391 kubelet[2586]: E0304 01:04:03.323269 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.323391 kubelet[2586]: W0304 01:04:03.323285 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.323391 kubelet[2586]: E0304 01:04:03.323300 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.324160 kubelet[2586]: E0304 01:04:03.324014 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.324160 kubelet[2586]: W0304 01:04:03.324031 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.324160 kubelet[2586]: E0304 01:04:03.324043 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.325479 kubelet[2586]: E0304 01:04:03.324554 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.325479 kubelet[2586]: W0304 01:04:03.324632 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.325479 kubelet[2586]: E0304 01:04:03.324643 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.325479 kubelet[2586]: E0304 01:04:03.325216 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.325479 kubelet[2586]: W0304 01:04:03.325226 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.325479 kubelet[2586]: E0304 01:04:03.325236 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.325842 kubelet[2586]: E0304 01:04:03.325765 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.325842 kubelet[2586]: W0304 01:04:03.325775 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.325842 kubelet[2586]: E0304 01:04:03.325785 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.327022 kubelet[2586]: E0304 01:04:03.326274 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.327022 kubelet[2586]: W0304 01:04:03.326287 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.327022 kubelet[2586]: E0304 01:04:03.326304 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.327022 kubelet[2586]: E0304 01:04:03.326989 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.327022 kubelet[2586]: W0304 01:04:03.327003 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.327022 kubelet[2586]: E0304 01:04:03.327016 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.328091 kubelet[2586]: E0304 01:04:03.327747 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.328091 kubelet[2586]: W0304 01:04:03.327797 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.328091 kubelet[2586]: E0304 01:04:03.327808 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.328900 kubelet[2586]: E0304 01:04:03.328796 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.328900 kubelet[2586]: W0304 01:04:03.328816 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.328900 kubelet[2586]: E0304 01:04:03.328828 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.330155 kubelet[2586]: E0304 01:04:03.329994 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.330155 kubelet[2586]: W0304 01:04:03.330067 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.330155 kubelet[2586]: E0304 01:04:03.330100 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.330914 kubelet[2586]: E0304 01:04:03.330831 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.330914 kubelet[2586]: W0304 01:04:03.330851 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.330914 kubelet[2586]: E0304 01:04:03.330873 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.332107 kubelet[2586]: E0304 01:04:03.331525 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.332107 kubelet[2586]: W0304 01:04:03.331798 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.332107 kubelet[2586]: E0304 01:04:03.331822 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.333836 kubelet[2586]: E0304 01:04:03.332766 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.333836 kubelet[2586]: W0304 01:04:03.333133 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.333836 kubelet[2586]: E0304 01:04:03.333156 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.333985 kubelet[2586]: E0304 01:04:03.333937 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.333985 kubelet[2586]: W0304 01:04:03.333954 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.333985 kubelet[2586]: E0304 01:04:03.333970 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.335515 kubelet[2586]: E0304 01:04:03.334712 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.335515 kubelet[2586]: W0304 01:04:03.334737 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.335515 kubelet[2586]: E0304 01:04:03.334755 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.335515 kubelet[2586]: E0304 01:04:03.335286 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.335515 kubelet[2586]: W0304 01:04:03.335300 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.335515 kubelet[2586]: E0304 01:04:03.335317 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:04:03.336299 kubelet[2586]: E0304 01:04:03.336233 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.336299 kubelet[2586]: W0304 01:04:03.336286 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.336405 kubelet[2586]: E0304 01:04:03.336305 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:04:03.337023 kubelet[2586]: E0304 01:04:03.336912 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:04:03.337023 kubelet[2586]: W0304 01:04:03.336930 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:04:03.337023 kubelet[2586]: E0304 01:04:03.336944 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 4 01:04:03.338305 kubelet[2586]: E0304 01:04:03.337906 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.338305 kubelet[2586]: W0304 01:04:03.337920 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.338305 kubelet[2586]: E0304 01:04:03.337935 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.342960 kubelet[2586]: E0304 01:04:03.338901 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.342960 kubelet[2586]: W0304 01:04:03.338953 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.342960 kubelet[2586]: E0304 01:04:03.338969 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.342960 kubelet[2586]: E0304 01:04:03.340379 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.342960 kubelet[2586]: W0304 01:04:03.340392 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.342960 kubelet[2586]: E0304 01:04:03.340404 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.342960 kubelet[2586]: E0304 01:04:03.342865 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.342960 kubelet[2586]: W0304 01:04:03.342877 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.342960 kubelet[2586]: E0304 01:04:03.342891 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.344862 kubelet[2586]: E0304 01:04:03.344404 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.344862 kubelet[2586]: W0304 01:04:03.344464 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.348079 kubelet[2586]: E0304 01:04:03.344548 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.350978 kubelet[2586]: E0304 01:04:03.350922 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.350978 kubelet[2586]: W0304 01:04:03.350946 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.350978 kubelet[2586]: E0304 01:04:03.350964 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.367739 containerd[1494]: time="2026-03-04T01:04:03.367553825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:03.373452 containerd[1494]: time="2026-03-04T01:04:03.372906212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 4 01:04:03.424324 containerd[1494]: time="2026-03-04T01:04:03.423941762Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:03.431782 containerd[1494]: time="2026-03-04T01:04:03.431133567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:03.437987 containerd[1494]: time="2026-03-04T01:04:03.432092137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 903.736697ms"
Mar 4 01:04:03.437987 containerd[1494]: time="2026-03-04T01:04:03.432135418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.440827 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.443019 kubelet[2586]: W0304 01:04:03.440847 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.440911 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.441736 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.443019 kubelet[2586]: W0304 01:04:03.441751 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.441772 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.442007 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.443019 kubelet[2586]: W0304 01:04:03.442016 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.442025 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.443019 kubelet[2586]: E0304 01:04:03.442318 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.443935 kubelet[2586]: W0304 01:04:03.442330 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.443935 kubelet[2586]: E0304 01:04:03.442348 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.443935 kubelet[2586]: E0304 01:04:03.442722 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.443935 kubelet[2586]: W0304 01:04:03.442732 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.443935 kubelet[2586]: E0304 01:04:03.442743 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.443935 kubelet[2586]: E0304 01:04:03.443553 2586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:04:03.443935 kubelet[2586]: W0304 01:04:03.443653 2586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:04:03.443935 kubelet[2586]: E0304 01:04:03.443703 2586 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:04:03.460371 containerd[1494]: time="2026-03-04T01:04:03.458756822Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 4 01:04:03.541091 containerd[1494]: time="2026-03-04T01:04:03.540906672Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c\""
Mar 4 01:04:03.542876 kubelet[2586]: E0304 01:04:03.542472 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:03.543301 containerd[1494]: time="2026-03-04T01:04:03.543199736Z" level=info msg="StartContainer for \"4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c\""
Mar 4 01:04:03.632310 systemd[1]: Started cri-containerd-4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c.scope - libcontainer container 4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c.
Mar 4 01:04:03.716817 containerd[1494]: time="2026-03-04T01:04:03.716532040Z" level=info msg="StartContainer for \"4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c\" returns successfully"
Mar 4 01:04:03.748055 systemd[1]: cri-containerd-4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c.scope: Deactivated successfully.
Mar 4 01:04:03.827271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c-rootfs.mount: Deactivated successfully.
Mar 4 01:04:03.872634 containerd[1494]: time="2026-03-04T01:04:03.868196443Z" level=info msg="shim disconnected" id=4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c namespace=k8s.io
Mar 4 01:04:03.872840 containerd[1494]: time="2026-03-04T01:04:03.872742296Z" level=warning msg="cleaning up after shim disconnected" id=4376ed218319878a1322d7557b7d47ac3bee6a8bc1c3a3a9a3b721859a82cd8c namespace=k8s.io
Mar 4 01:04:03.872840 containerd[1494]: time="2026-03-04T01:04:03.872763186Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:04:03.927307 containerd[1494]: time="2026-03-04T01:04:03.926730478Z" level=warning msg="cleanup warnings time=\"2026-03-04T01:04:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 4 01:04:04.281059 kubelet[2586]: E0304 01:04:04.280930 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:04.292500 containerd[1494]: time="2026-03-04T01:04:04.286861882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 4 01:04:05.281526 kubelet[2586]: E0304 01:04:05.281312 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:05.544991 kubelet[2586]: E0304 01:04:05.543463 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:08.512243 kubelet[2586]: E0304 01:04:08.509487 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:10.553265 kubelet[2586]: E0304 01:04:10.553123 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:12.546092 kubelet[2586]: E0304 01:04:12.545212 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:14.940232 kubelet[2586]: E0304 01:04:14.939979 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:16.686021 kubelet[2586]: E0304 01:04:16.679690 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:19.759713 kubelet[2586]: E0304 01:04:19.759309 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:21.544725 kubelet[2586]: E0304 01:04:21.544299 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:23.543501 kubelet[2586]: E0304 01:04:23.543283 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:25.232301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139303526.mount: Deactivated successfully.
Mar 4 01:04:25.311686 containerd[1494]: time="2026-03-04T01:04:25.310789757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:25.314070 containerd[1494]: time="2026-03-04T01:04:25.313525231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 4 01:04:25.315556 containerd[1494]: time="2026-03-04T01:04:25.315481726Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:25.321061 containerd[1494]: time="2026-03-04T01:04:25.320685336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:25.321960 containerd[1494]: time="2026-03-04T01:04:25.321860266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 21.034953019s"
Mar 4 01:04:25.324861 containerd[1494]: time="2026-03-04T01:04:25.324752123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 4 01:04:25.341752 containerd[1494]: time="2026-03-04T01:04:25.341535409Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 4 01:04:25.383981 containerd[1494]: time="2026-03-04T01:04:25.383815413Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93\""
Mar 4 01:04:25.403288 containerd[1494]: time="2026-03-04T01:04:25.403178795Z" level=info msg="StartContainer for \"a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93\""
Mar 4 01:04:25.542417 kubelet[2586]: E0304 01:04:25.542339 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:25.597337 systemd[1]: Started cri-containerd-a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93.scope - libcontainer container a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93.
Mar 4 01:04:25.679330 containerd[1494]: time="2026-03-04T01:04:25.679288512Z" level=info msg="StartContainer for \"a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93\" returns successfully"
Mar 4 01:04:25.923150 systemd[1]: cri-containerd-a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93.scope: Deactivated successfully.
Mar 4 01:04:26.150096 containerd[1494]: time="2026-03-04T01:04:26.149248509Z" level=info msg="shim disconnected" id=a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93 namespace=k8s.io
Mar 4 01:04:26.150096 containerd[1494]: time="2026-03-04T01:04:26.149871196Z" level=warning msg="cleaning up after shim disconnected" id=a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93 namespace=k8s.io
Mar 4 01:04:26.150096 containerd[1494]: time="2026-03-04T01:04:26.149947328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:04:26.232191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a845b9cb9fe43a812353a6ebc39f392f6485398e6691271ee2ea36afc13edb93-rootfs.mount: Deactivated successfully.
Mar 4 01:04:26.910353 containerd[1494]: time="2026-03-04T01:04:26.910307089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 4 01:04:27.543201 kubelet[2586]: E0304 01:04:27.543071 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:29.543899 kubelet[2586]: E0304 01:04:29.543174 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:31.544729 kubelet[2586]: E0304 01:04:31.542478 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682"
Mar 4 01:04:31.658823 containerd[1494]: time="2026-03-04T01:04:31.657034140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:31.677281 containerd[1494]: time="2026-03-04T01:04:31.677137348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 4 01:04:31.679221 containerd[1494]: time="2026-03-04T01:04:31.679100706Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:31.684168 containerd[1494]: time="2026-03-04T01:04:31.684042561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:04:31.685359 containerd[1494]: time="2026-03-04T01:04:31.685250893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.773558511s"
Mar 4 01:04:31.685359 containerd[1494]: time="2026-03-04T01:04:31.685342124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 4 01:04:31.714878 containerd[1494]: time="2026-03-04T01:04:31.714768982Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 4 01:04:31.752032 containerd[1494]: time="2026-03-04T01:04:31.751829554Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15\""
Mar 4 01:04:31.753271 containerd[1494]: time="2026-03-04T01:04:31.753107017Z" level=info msg="StartContainer for \"59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15\""
Mar 4 01:04:31.867827 systemd[1]: Started cri-containerd-59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15.scope - libcontainer container 59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15.
Mar 4 01:04:31.932017 containerd[1494]: time="2026-03-04T01:04:31.931810799Z" level=info msg="StartContainer for \"59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15\" returns successfully"
Mar 4 01:04:32.966316 systemd[1]: cri-containerd-59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15.scope: Deactivated successfully.
Mar 4 01:04:32.966758 systemd[1]: cri-containerd-59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15.scope: Consumed 1.309s CPU time.
Mar 4 01:04:33.040431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15-rootfs.mount: Deactivated successfully.
Mar 4 01:04:33.049678 kubelet[2586]: I0304 01:04:33.048034 2586 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 4 01:04:33.108413 containerd[1494]: time="2026-03-04T01:04:33.108136999Z" level=info msg="shim disconnected" id=59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15 namespace=k8s.io
Mar 4 01:04:33.108413 containerd[1494]: time="2026-03-04T01:04:33.108195819Z" level=warning msg="cleaning up after shim disconnected" id=59d20e311540cb897422660342a7f5aa126d38bb1c9b923c95245673c530cf15 namespace=k8s.io
Mar 4 01:04:33.108413 containerd[1494]: time="2026-03-04T01:04:33.108205848Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:04:33.187503 systemd[1]: Created slice kubepods-burstable-podaadc3f76_505a_444f_bf40_597198f092e8.slice - libcontainer container kubepods-burstable-podaadc3f76_505a_444f_bf40_597198f092e8.slice.
Mar 4 01:04:33.209867 systemd[1]: Created slice kubepods-besteffort-podac58af22_43eb_4c31_9dcb_bdf8a11f443d.slice - libcontainer container kubepods-besteffort-podac58af22_43eb_4c31_9dcb_bdf8a11f443d.slice.
Mar 4 01:04:33.220008 systemd[1]: Created slice kubepods-besteffort-pod544c68eb_a5e1_4a51_ad76_27bba7dba868.slice - libcontainer container kubepods-besteffort-pod544c68eb_a5e1_4a51_ad76_27bba7dba868.slice.
Mar 4 01:04:33.237309 systemd[1]: Created slice kubepods-burstable-pod33342b7d_98dc_47af_8bcb_0cd875c1acc0.slice - libcontainer container kubepods-burstable-pod33342b7d_98dc_47af_8bcb_0cd875c1acc0.slice.
Mar 4 01:04:33.253095 kubelet[2586]: I0304 01:04:33.252237 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33342b7d-98dc-47af-8bcb-0cd875c1acc0-config-volume\") pod \"coredns-7d764666f9-4r8bd\" (UID: \"33342b7d-98dc-47af-8bcb-0cd875c1acc0\") " pod="kube-system/coredns-7d764666f9-4r8bd"
Mar 4 01:04:33.253095 kubelet[2586]: I0304 01:04:33.252292 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72e16b4b-2414-436d-9047-7c572057b1a0-calico-apiserver-certs\") pod \"calico-apiserver-7d448765fc-8j94r\" (UID: \"72e16b4b-2414-436d-9047-7c572057b1a0\") " pod="calico-system/calico-apiserver-7d448765fc-8j94r"
Mar 4 01:04:33.253095 kubelet[2586]: I0304 01:04:33.252391 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwx6b\" (UniqueName: \"kubernetes.io/projected/aadc3f76-505a-444f-bf40-597198f092e8-kube-api-access-kwx6b\") pod \"coredns-7d764666f9-jtjj8\" (UID: \"aadc3f76-505a-444f-bf40-597198f092e8\") " pod="kube-system/coredns-7d764666f9-jtjj8"
Mar 4 01:04:33.253095 kubelet[2586]: I0304 01:04:33.252423 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac58af22-43eb-4c31-9dcb-bdf8a11f443d-tigera-ca-bundle\") pod \"calico-kube-controllers-cd8cffdcd-9vxkx\" (UID: \"ac58af22-43eb-4c31-9dcb-bdf8a11f443d\") " pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx"
Mar 4 01:04:33.253095 kubelet[2586]: I0304 01:04:33.252455 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-nginx-config\") pod \"whisker-674c968669-fd8l6\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " pod="calico-system/whisker-674c968669-fd8l6"
Mar 4 01:04:33.254882 kubelet[2586]: I0304 01:04:33.252484 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kbg\" (UniqueName: \"kubernetes.io/projected/72e16b4b-2414-436d-9047-7c572057b1a0-kube-api-access-62kbg\") pod \"calico-apiserver-7d448765fc-8j94r\" (UID: \"72e16b4b-2414-436d-9047-7c572057b1a0\") " pod="calico-system/calico-apiserver-7d448765fc-8j94r"
Mar 4 01:04:33.254882 kubelet[2586]: I0304 01:04:33.252675 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544c68eb-a5e1-4a51-ad76-27bba7dba868-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-n4mqq\" (UID: \"544c68eb-a5e1-4a51-ad76-27bba7dba868\") " pod="calico-system/goldmane-9f7667bb8-n4mqq"
Mar 4 01:04:33.254882 kubelet[2586]: I0304 01:04:33.252704 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-backend-key-pair\") pod \"whisker-674c968669-fd8l6\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " pod="calico-system/whisker-674c968669-fd8l6"
Mar 4 01:04:33.254882 kubelet[2586]: I0304 01:04:33.252727 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-ca-bundle\") pod \"whisker-674c968669-fd8l6\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " pod="calico-system/whisker-674c968669-fd8l6"
Mar 4 01:04:33.254882 kubelet[2586]: I0304 01:04:33.252753 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zw8p\" (UniqueName: \"kubernetes.io/projected/544c68eb-a5e1-4a51-ad76-27bba7dba868-kube-api-access-2zw8p\") pod \"goldmane-9f7667bb8-n4mqq\" (UID: \"544c68eb-a5e1-4a51-ad76-27bba7dba868\") " pod="calico-system/goldmane-9f7667bb8-n4mqq"
Mar 4 01:04:33.255056 kubelet[2586]: I0304 01:04:33.252782 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8hs\" (UniqueName: \"kubernetes.io/projected/c8861453-5d14-406c-80f9-534ae729a8fe-kube-api-access-4z8hs\") pod \"whisker-674c968669-fd8l6\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " pod="calico-system/whisker-674c968669-fd8l6"
Mar 4 01:04:33.255056 kubelet[2586]: I0304 01:04:33.253151 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aadc3f76-505a-444f-bf40-597198f092e8-config-volume\") pod \"coredns-7d764666f9-jtjj8\" (UID: \"aadc3f76-505a-444f-bf40-597198f092e8\") " pod="kube-system/coredns-7d764666f9-jtjj8"
Mar 4 01:04:33.255056 kubelet[2586]: I0304 01:04:33.253185 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/544c68eb-a5e1-4a51-ad76-27bba7dba868-config\") pod \"goldmane-9f7667bb8-n4mqq\" (UID: \"544c68eb-a5e1-4a51-ad76-27bba7dba868\") " pod="calico-system/goldmane-9f7667bb8-n4mqq"
Mar 4 01:04:33.255056 kubelet[2586]: I0304 01:04:33.253265 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2c4m\" (UniqueName: \"kubernetes.io/projected/ac58af22-43eb-4c31-9dcb-bdf8a11f443d-kube-api-access-r2c4m\") pod \"calico-kube-controllers-cd8cffdcd-9vxkx\" (UID: \"ac58af22-43eb-4c31-9dcb-bdf8a11f443d\") " pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx"
Mar 4 01:04:33.255056 kubelet[2586]: I0304 01:04:33.253303 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkv8\" (UniqueName: \"kubernetes.io/projected/33342b7d-98dc-47af-8bcb-0cd875c1acc0-kube-api-access-6jkv8\") pod \"coredns-7d764666f9-4r8bd\" (UID: \"33342b7d-98dc-47af-8bcb-0cd875c1acc0\") " pod="kube-system/coredns-7d764666f9-4r8bd"
Mar 4 01:04:33.255275 kubelet[2586]: I0304 01:04:33.253367 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be51fa33-3cc1-4ec8-b66b-77876e60bb47-calico-apiserver-certs\") pod \"calico-apiserver-7d448765fc-92hmt\" (UID: \"be51fa33-3cc1-4ec8-b66b-77876e60bb47\") " pod="calico-system/calico-apiserver-7d448765fc-92hmt"
Mar 4 01:04:33.255275 kubelet[2586]: I0304 01:04:33.253402 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9nz\" (UniqueName: \"kubernetes.io/projected/be51fa33-3cc1-4ec8-b66b-77876e60bb47-kube-api-access-5m9nz\") pod \"calico-apiserver-7d448765fc-92hmt\" (UID: \"be51fa33-3cc1-4ec8-b66b-77876e60bb47\") " pod="calico-system/calico-apiserver-7d448765fc-92hmt"
Mar 4 01:04:33.255275 kubelet[2586]: I0304 01:04:33.253426 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/544c68eb-a5e1-4a51-ad76-27bba7dba868-goldmane-key-pair\") pod \"goldmane-9f7667bb8-n4mqq\" (UID: \"544c68eb-a5e1-4a51-ad76-27bba7dba868\") " pod="calico-system/goldmane-9f7667bb8-n4mqq"
Mar 4 01:04:33.255707 systemd[1]: Created slice kubepods-besteffort-pod72e16b4b_2414_436d_9047_7c572057b1a0.slice - libcontainer container kubepods-besteffort-pod72e16b4b_2414_436d_9047_7c572057b1a0.slice.
Mar 4 01:04:33.264996 systemd[1]: Created slice kubepods-besteffort-podc8861453_5d14_406c_80f9_534ae729a8fe.slice - libcontainer container kubepods-besteffort-podc8861453_5d14_406c_80f9_534ae729a8fe.slice.
Mar 4 01:04:33.294720 systemd[1]: Created slice kubepods-besteffort-podbe51fa33_3cc1_4ec8_b66b_77876e60bb47.slice - libcontainer container kubepods-besteffort-podbe51fa33_3cc1_4ec8_b66b_77876e60bb47.slice. Mar 4 01:04:33.507742 kubelet[2586]: E0304 01:04:33.507198 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:33.508798 containerd[1494]: time="2026-03-04T01:04:33.508522571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtjj8,Uid:aadc3f76-505a-444f-bf40-597198f092e8,Namespace:kube-system,Attempt:0,}" Mar 4 01:04:33.521681 containerd[1494]: time="2026-03-04T01:04:33.521468950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8cffdcd-9vxkx,Uid:ac58af22-43eb-4c31-9dcb-bdf8a11f443d,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:33.530006 containerd[1494]: time="2026-03-04T01:04:33.529904844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-n4mqq,Uid:544c68eb-a5e1-4a51-ad76-27bba7dba868,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:33.549410 kubelet[2586]: E0304 01:04:33.549185 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:33.552882 containerd[1494]: time="2026-03-04T01:04:33.552693481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4r8bd,Uid:33342b7d-98dc-47af-8bcb-0cd875c1acc0,Namespace:kube-system,Attempt:0,}" Mar 4 01:04:33.560427 systemd[1]: Created slice kubepods-besteffort-pod70e9d987_0384_4b7c_aa94_bbc127680682.slice - libcontainer container kubepods-besteffort-pod70e9d987_0384_4b7c_aa94_bbc127680682.slice. 
Mar 4 01:04:33.566845 containerd[1494]: time="2026-03-04T01:04:33.566506912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-8j94r,Uid:72e16b4b-2414-436d-9047-7c572057b1a0,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:33.574990 containerd[1494]: time="2026-03-04T01:04:33.574744698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b8wzc,Uid:70e9d987-0384-4b7c-aa94-bbc127680682,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:33.583398 containerd[1494]: time="2026-03-04T01:04:33.583359369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674c968669-fd8l6,Uid:c8861453-5d14-406c-80f9-534ae729a8fe,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:33.606892 containerd[1494]: time="2026-03-04T01:04:33.606396646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-92hmt,Uid:be51fa33-3cc1-4ec8-b66b-77876e60bb47,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:33.838831 containerd[1494]: time="2026-03-04T01:04:33.838764785Z" level=error msg="Failed to destroy network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.840476 containerd[1494]: time="2026-03-04T01:04:33.840429822Z" level=error msg="encountered an error cleaning up failed sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.841046 containerd[1494]: time="2026-03-04T01:04:33.840767344Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-9f7667bb8-n4mqq,Uid:544c68eb-a5e1-4a51-ad76-27bba7dba868,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.861658 kubelet[2586]: E0304 01:04:33.861018 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.861658 kubelet[2586]: E0304 01:04:33.861199 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-n4mqq" Mar 4 01:04:33.861658 kubelet[2586]: E0304 01:04:33.861283 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-n4mqq" Mar 4 01:04:33.861995 kubelet[2586]: E0304 01:04:33.861371 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-9f7667bb8-n4mqq_calico-system(544c68eb-a5e1-4a51-ad76-27bba7dba868)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-n4mqq_calico-system(544c68eb-a5e1-4a51-ad76-27bba7dba868)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-n4mqq" podUID="544c68eb-a5e1-4a51-ad76-27bba7dba868" Mar 4 01:04:33.866101 containerd[1494]: time="2026-03-04T01:04:33.866012129Z" level=error msg="Failed to destroy network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.867277 containerd[1494]: time="2026-03-04T01:04:33.867201055Z" level=error msg="encountered an error cleaning up failed sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.867445 containerd[1494]: time="2026-03-04T01:04:33.867323144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtjj8,Uid:aadc3f76-505a-444f-bf40-597198f092e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.867836 kubelet[2586]: E0304 01:04:33.867676 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.867836 kubelet[2586]: E0304 01:04:33.867741 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-jtjj8" Mar 4 01:04:33.867836 kubelet[2586]: E0304 01:04:33.867762 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-jtjj8" Mar 4 01:04:33.868081 kubelet[2586]: E0304 01:04:33.867826 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-jtjj8_kube-system(aadc3f76-505a-444f-bf40-597198f092e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-jtjj8_kube-system(aadc3f76-505a-444f-bf40-597198f092e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-jtjj8" podUID="aadc3f76-505a-444f-bf40-597198f092e8" Mar 4 01:04:33.878484 containerd[1494]: time="2026-03-04T01:04:33.878431164Z" level=error msg="Failed to destroy network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.881405 containerd[1494]: time="2026-03-04T01:04:33.881269349Z" level=error msg="encountered an error cleaning up failed sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.881405 containerd[1494]: time="2026-03-04T01:04:33.881336343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8cffdcd-9vxkx,Uid:ac58af22-43eb-4c31-9dcb-bdf8a11f443d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.882817 kubelet[2586]: E0304 01:04:33.882248 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.882817 kubelet[2586]: E0304 01:04:33.882368 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx" Mar 4 01:04:33.882817 kubelet[2586]: E0304 01:04:33.882395 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx" Mar 4 01:04:33.883158 kubelet[2586]: E0304 01:04:33.882458 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd8cffdcd-9vxkx_calico-system(ac58af22-43eb-4c31-9dcb-bdf8a11f443d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd8cffdcd-9vxkx_calico-system(ac58af22-43eb-4c31-9dcb-bdf8a11f443d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx" podUID="ac58af22-43eb-4c31-9dcb-bdf8a11f443d" Mar 4 01:04:33.946913 
containerd[1494]: time="2026-03-04T01:04:33.945512187Z" level=error msg="Failed to destroy network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.946913 containerd[1494]: time="2026-03-04T01:04:33.946416520Z" level=error msg="encountered an error cleaning up failed sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.946913 containerd[1494]: time="2026-03-04T01:04:33.946468427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4r8bd,Uid:33342b7d-98dc-47af-8bcb-0cd875c1acc0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.948848 kubelet[2586]: E0304 01:04:33.948682 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:33.948848 kubelet[2586]: E0304 01:04:33.948753 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-4r8bd" Mar 4 01:04:33.948848 kubelet[2586]: E0304 01:04:33.948780 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-4r8bd" Mar 4 01:04:33.951198 kubelet[2586]: E0304 01:04:33.948856 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-4r8bd_kube-system(33342b7d-98dc-47af-8bcb-0cd875c1acc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-4r8bd_kube-system(33342b7d-98dc-47af-8bcb-0cd875c1acc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-4r8bd" podUID="33342b7d-98dc-47af-8bcb-0cd875c1acc0" Mar 4 01:04:33.973635 kubelet[2586]: I0304 01:04:33.972490 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Mar 4 01:04:34.008205 containerd[1494]: time="2026-03-04T01:04:34.008064695Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 01:04:34.008853 kubelet[2586]: I0304 01:04:34.008553 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:04:34.030680 containerd[1494]: time="2026-03-04T01:04:34.030172049Z" level=error msg="Failed to destroy network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.031328 containerd[1494]: time="2026-03-04T01:04:34.031117098Z" level=error msg="encountered an error cleaning up failed sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.031328 containerd[1494]: time="2026-03-04T01:04:34.031242694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674c968669-fd8l6,Uid:c8861453-5d14-406c-80f9-534ae729a8fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.034075 kubelet[2586]: E0304 01:04:34.033770 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.034075 kubelet[2586]: E0304 01:04:34.033826 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674c968669-fd8l6" Mar 4 01:04:34.034075 kubelet[2586]: E0304 01:04:34.033853 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674c968669-fd8l6" Mar 4 01:04:34.035412 kubelet[2586]: E0304 01:04:34.033913 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-674c968669-fd8l6_calico-system(c8861453-5d14-406c-80f9-534ae729a8fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-674c968669-fd8l6_calico-system(c8861453-5d14-406c-80f9-534ae729a8fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674c968669-fd8l6" podUID="c8861453-5d14-406c-80f9-534ae729a8fe" Mar 4 01:04:34.042669 containerd[1494]: time="2026-03-04T01:04:34.042312360Z" level=info msg="StopPodSandbox for 
\"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\"" Mar 4 01:04:34.043196 containerd[1494]: time="2026-03-04T01:04:34.042929816Z" level=info msg="StopPodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\"" Mar 4 01:04:34.046075 containerd[1494]: time="2026-03-04T01:04:34.045741501Z" level=info msg="Ensure that sandbox 162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560 in task-service has been cleanup successfully" Mar 4 01:04:34.046758 containerd[1494]: time="2026-03-04T01:04:34.046474864Z" level=info msg="Ensure that sandbox f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084 in task-service has been cleanup successfully" Mar 4 01:04:34.055857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969-shm.mount: Deactivated successfully. Mar 4 01:04:34.071700 kubelet[2586]: I0304 01:04:34.071478 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:04:34.080113 containerd[1494]: time="2026-03-04T01:04:34.076919626Z" level=info msg="StopPodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\"" Mar 4 01:04:34.085175 containerd[1494]: time="2026-03-04T01:04:34.084755884Z" level=info msg="Ensure that sandbox 62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969 in task-service has been cleanup successfully" Mar 4 01:04:34.123708 containerd[1494]: time="2026-03-04T01:04:34.121785443Z" level=error msg="Failed to destroy network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.125928 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a-shm.mount: Deactivated successfully. Mar 4 01:04:34.128666 containerd[1494]: time="2026-03-04T01:04:34.127206022Z" level=error msg="encountered an error cleaning up failed sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.128666 containerd[1494]: time="2026-03-04T01:04:34.127293266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-8j94r,Uid:72e16b4b-2414-436d-9047-7c572057b1a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.133748 kubelet[2586]: E0304 01:04:34.132033 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.133748 kubelet[2586]: E0304 01:04:34.132114 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d448765fc-8j94r" Mar 4 01:04:34.133748 kubelet[2586]: E0304 01:04:34.132140 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d448765fc-8j94r" Mar 4 01:04:34.133930 kubelet[2586]: E0304 01:04:34.132206 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d448765fc-8j94r_calico-system(72e16b4b-2414-436d-9047-7c572057b1a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d448765fc-8j94r_calico-system(72e16b4b-2414-436d-9047-7c572057b1a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d448765fc-8j94r" podUID="72e16b4b-2414-436d-9047-7c572057b1a0" Mar 4 01:04:34.144178 containerd[1494]: time="2026-03-04T01:04:34.144118450Z" level=error msg="Failed to destroy network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.149803 containerd[1494]: time="2026-03-04T01:04:34.149757408Z" level=error msg="Failed to destroy network for sandbox 
\"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.153225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb-shm.mount: Deactivated successfully. Mar 4 01:04:34.163151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331550868.mount: Deactivated successfully. Mar 4 01:04:34.163292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c-shm.mount: Deactivated successfully. Mar 4 01:04:34.165696 containerd[1494]: time="2026-03-04T01:04:34.165528858Z" level=error msg="encountered an error cleaning up failed sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.165887 containerd[1494]: time="2026-03-04T01:04:34.165847635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-92hmt,Uid:be51fa33-3cc1-4ec8-b66b-77876e60bb47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.167185 containerd[1494]: time="2026-03-04T01:04:34.167060496Z" level=error msg="encountered an error cleaning up failed sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.167344 containerd[1494]: time="2026-03-04T01:04:34.167312569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b8wzc,Uid:70e9d987-0384-4b7c-aa94-bbc127680682,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.168041 kubelet[2586]: E0304 01:04:34.167787 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.168041 kubelet[2586]: E0304 01:04:34.167858 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d448765fc-92hmt" Mar 4 01:04:34.168041 kubelet[2586]: E0304 01:04:34.167883 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d448765fc-92hmt" Mar 4 01:04:34.168200 kubelet[2586]: E0304 01:04:34.168010 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d448765fc-92hmt_calico-system(be51fa33-3cc1-4ec8-b66b-77876e60bb47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d448765fc-92hmt_calico-system(be51fa33-3cc1-4ec8-b66b-77876e60bb47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d448765fc-92hmt" podUID="be51fa33-3cc1-4ec8-b66b-77876e60bb47" Mar 4 01:04:34.173281 kubelet[2586]: E0304 01:04:34.173103 2586 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.173281 kubelet[2586]: E0304 01:04:34.173158 2586 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:04:34.173281 kubelet[2586]: E0304 
01:04:34.173183 2586 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b8wzc" Mar 4 01:04:34.173780 kubelet[2586]: E0304 01:04:34.173236 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b8wzc_calico-system(70e9d987-0384-4b7c-aa94-bbc127680682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b8wzc_calico-system(70e9d987-0384-4b7c-aa94-bbc127680682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b8wzc" podUID="70e9d987-0384-4b7c-aa94-bbc127680682" Mar 4 01:04:34.201382 containerd[1494]: time="2026-03-04T01:04:34.194425580Z" level=error msg="StopPodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" failed" error="failed to destroy network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.204168 kubelet[2586]: E0304 01:04:34.203924 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:04:34.204288 kubelet[2586]: E0304 01:04:34.204064 2586 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969"} Mar 4 01:04:34.204288 kubelet[2586]: E0304 01:04:34.204244 2586 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aadc3f76-505a-444f-bf40-597198f092e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:04:34.204288 kubelet[2586]: E0304 01:04:34.204274 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aadc3f76-505a-444f-bf40-597198f092e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-jtjj8" podUID="aadc3f76-505a-444f-bf40-597198f092e8" Mar 4 01:04:34.204768 containerd[1494]: time="2026-03-04T01:04:34.204689263Z" level=info msg="CreateContainer within sandbox \"3a87fe8fe0ea078b482b97ba3d8406c646599524f226809a43adff41f4c9edc6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc\"" Mar 4 01:04:34.207671 containerd[1494]: time="2026-03-04T01:04:34.207414226Z" level=info msg="StartContainer for \"647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc\"" Mar 4 01:04:34.239301 containerd[1494]: time="2026-03-04T01:04:34.239251341Z" level=error msg="StopPodSandbox for \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" failed" error="failed to destroy network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.241002 containerd[1494]: time="2026-03-04T01:04:34.240917221Z" level=error msg="StopPodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" failed" error="failed to destroy network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:04:34.241091 kubelet[2586]: E0304 01:04:34.240926 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:04:34.241138 kubelet[2586]: E0304 01:04:34.241089 2586 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084"} Mar 4 01:04:34.241190 
kubelet[2586]: E0304 01:04:34.241140 2586 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac58af22-43eb-4c31-9dcb-bdf8a11f443d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:04:34.241331 kubelet[2586]: E0304 01:04:34.241181 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac58af22-43eb-4c31-9dcb-bdf8a11f443d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx" podUID="ac58af22-43eb-4c31-9dcb-bdf8a11f443d" Mar 4 01:04:34.242690 kubelet[2586]: E0304 01:04:34.242387 2586 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Mar 4 01:04:34.242690 kubelet[2586]: E0304 01:04:34.242495 2586 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"} Mar 4 01:04:34.242690 kubelet[2586]: E0304 
01:04:34.242534 2586 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"544c68eb-a5e1-4a51-ad76-27bba7dba868\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:04:34.242690 kubelet[2586]: E0304 01:04:34.242673 2586 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"544c68eb-a5e1-4a51-ad76-27bba7dba868\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-n4mqq" podUID="544c68eb-a5e1-4a51-ad76-27bba7dba868" Mar 4 01:04:34.266804 systemd[1]: Started cri-containerd-647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc.scope - libcontainer container 647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc. 
Mar 4 01:04:34.339189 containerd[1494]: time="2026-03-04T01:04:34.339003775Z" level=info msg="StartContainer for \"647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc\" returns successfully" Mar 4 01:04:35.077229 kubelet[2586]: I0304 01:04:35.076866 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Mar 4 01:04:35.078104 containerd[1494]: time="2026-03-04T01:04:35.077863221Z" level=info msg="StopPodSandbox for \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\"" Mar 4 01:04:35.078204 containerd[1494]: time="2026-03-04T01:04:35.078151922Z" level=info msg="Ensure that sandbox 715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c in task-service has been cleanup successfully" Mar 4 01:04:35.094922 kubelet[2586]: I0304 01:04:35.091913 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:04:35.095145 containerd[1494]: time="2026-03-04T01:04:35.092839493Z" level=info msg="StopPodSandbox for \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\"" Mar 4 01:04:35.095145 containerd[1494]: time="2026-03-04T01:04:35.093096655Z" level=info msg="Ensure that sandbox 6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb in task-service has been cleanup successfully" Mar 4 01:04:35.099735 kubelet[2586]: I0304 01:04:35.099692 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:04:35.102113 containerd[1494]: time="2026-03-04T01:04:35.101767519Z" level=info msg="StopPodSandbox for \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\"" Mar 4 01:04:35.102113 containerd[1494]: time="2026-03-04T01:04:35.102039579Z" level=info msg="Ensure that sandbox 
d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704 in task-service has been cleanup successfully" Mar 4 01:04:35.105240 kubelet[2586]: I0304 01:04:35.104485 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:04:35.105305 containerd[1494]: time="2026-03-04T01:04:35.105028546Z" level=info msg="StopPodSandbox for \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\"" Mar 4 01:04:35.105305 containerd[1494]: time="2026-03-04T01:04:35.105152667Z" level=info msg="Ensure that sandbox e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55 in task-service has been cleanup successfully" Mar 4 01:04:35.108178 kubelet[2586]: I0304 01:04:35.108041 2586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Mar 4 01:04:35.110138 containerd[1494]: time="2026-03-04T01:04:35.110109819Z" level=info msg="StopPodSandbox for \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\"" Mar 4 01:04:35.112852 containerd[1494]: time="2026-03-04T01:04:35.112829350Z" level=info msg="Ensure that sandbox 7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a in task-service has been cleanup successfully" Mar 4 01:04:35.122353 kubelet[2586]: I0304 01:04:35.122083 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-k48bj" podStartSLOduration=2.102266949 podStartE2EDuration="37.122005081s" podCreationTimestamp="2026-03-04 01:03:58 +0000 UTC" firstStartedPulling="2026-03-04 01:03:58.942287075 +0000 UTC m=+29.255379676" lastFinishedPulling="2026-03-04 01:04:33.962025207 +0000 UTC m=+64.275117808" observedRunningTime="2026-03-04 01:04:35.118267713 +0000 UTC m=+65.431360314" watchObservedRunningTime="2026-03-04 01:04:35.122005081 +0000 UTC m=+65.435097681" Mar 4 01:04:35.449379 
containerd[1494]: 2026-03-04 01:04:35.262 [INFO][3923] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.262 [INFO][3923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" iface="eth0" netns="/var/run/netns/cni-09c3da0b-fa8a-146d-6020-cf175e067ac1" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.264 [INFO][3923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" iface="eth0" netns="/var/run/netns/cni-09c3da0b-fa8a-146d-6020-cf175e067ac1" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.265 [INFO][3923] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" iface="eth0" netns="/var/run/netns/cni-09c3da0b-fa8a-146d-6020-cf175e067ac1" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.266 [INFO][3923] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.266 [INFO][3923] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.406 [INFO][3990] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.406 [INFO][3990] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.406 [INFO][3990] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.429 [WARNING][3990] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.429 [INFO][3990] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.433 [INFO][3990] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:35.449379 containerd[1494]: 2026-03-04 01:04:35.445 [INFO][3923] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:04:35.452058 containerd[1494]: time="2026-03-04T01:04:35.451990005Z" level=info msg="TearDown network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\" successfully" Mar 4 01:04:35.452058 containerd[1494]: time="2026-03-04T01:04:35.452029729Z" level=info msg="StopPodSandbox for \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\" returns successfully" Mar 4 01:04:35.455523 systemd[1]: run-netns-cni\x2d09c3da0b\x2dfa8a\x2d146d\x2d6020\x2dcf175e067ac1.mount: Deactivated successfully. 
Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.246 [INFO][3882] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.246 [INFO][3882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" iface="eth0" netns="/var/run/netns/cni-815f3f3e-7643-58dd-b2c5-f936d55ec550" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.257 [INFO][3882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" iface="eth0" netns="/var/run/netns/cni-815f3f3e-7643-58dd-b2c5-f936d55ec550" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.257 [INFO][3882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" iface="eth0" netns="/var/run/netns/cni-815f3f3e-7643-58dd-b2c5-f936d55ec550" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.257 [INFO][3882] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.257 [INFO][3882] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.407 [INFO][3988] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.408 [INFO][3988] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.433 [INFO][3988] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.448 [WARNING][3988] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.448 [INFO][3988] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.452 [INFO][3988] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:35.467063 containerd[1494]: 2026-03-04 01:04:35.460 [INFO][3882] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Mar 4 01:04:35.468098 containerd[1494]: time="2026-03-04T01:04:35.467909486Z" level=info msg="TearDown network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\" successfully" Mar 4 01:04:35.468098 containerd[1494]: time="2026-03-04T01:04:35.468030332Z" level=info msg="StopPodSandbox for \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\" returns successfully" Mar 4 01:04:35.473268 systemd[1]: run-netns-cni\x2d815f3f3e\x2d7643\x2d58dd\x2db2c5\x2df936d55ec550.mount: Deactivated successfully. 
Mar 4 01:04:35.478439 containerd[1494]: time="2026-03-04T01:04:35.478372445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b8wzc,Uid:70e9d987-0384-4b7c-aa94-bbc127680682,Namespace:calico-system,Attempt:1,}" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.345 [INFO][3944] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.346 [INFO][3944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" iface="eth0" netns="/var/run/netns/cni-ca0a6bae-3845-c415-dcf6-2162066f74cd" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.347 [INFO][3944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" iface="eth0" netns="/var/run/netns/cni-ca0a6bae-3845-c415-dcf6-2162066f74cd" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.348 [INFO][3944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" iface="eth0" netns="/var/run/netns/cni-ca0a6bae-3845-c415-dcf6-2162066f74cd" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.349 [INFO][3944] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.349 [INFO][3944] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.409 [INFO][4015] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.410 [INFO][4015] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.453 [INFO][4015] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.472 [WARNING][4015] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.472 [INFO][4015] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.476 [INFO][4015] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:35.498544 containerd[1494]: 2026-03-04 01:04:35.486 [INFO][3944] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Mar 4 01:04:35.504101 containerd[1494]: time="2026-03-04T01:04:35.503708746Z" level=info msg="TearDown network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\" successfully" Mar 4 01:04:35.504101 containerd[1494]: time="2026-03-04T01:04:35.503805077Z" level=info msg="StopPodSandbox for \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\" returns successfully" Mar 4 01:04:35.505926 systemd[1]: run-netns-cni\x2dca0a6bae\x2d3845\x2dc415\x2ddcf6\x2d2162066f74cd.mount: Deactivated successfully. 
Mar 4 01:04:35.508443 containerd[1494]: time="2026-03-04T01:04:35.508301906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-8j94r,Uid:72e16b4b-2414-436d-9047-7c572057b1a0,Namespace:calico-system,Attempt:1,}" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.271 [INFO][3929] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.273 [INFO][3929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" iface="eth0" netns="/var/run/netns/cni-49c79101-6523-a466-b01f-90dae200e325" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.274 [INFO][3929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" iface="eth0" netns="/var/run/netns/cni-49c79101-6523-a466-b01f-90dae200e325" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.275 [INFO][3929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" iface="eth0" netns="/var/run/netns/cni-49c79101-6523-a466-b01f-90dae200e325" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.275 [INFO][3929] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.275 [INFO][3929] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.415 [INFO][4003] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.416 [INFO][4003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.476 [INFO][4003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.502 [WARNING][4003] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.502 [INFO][4003] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.521 [INFO][4003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:35.554729 containerd[1494]: 2026-03-04 01:04:35.539 [INFO][3929] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:04:35.555554 containerd[1494]: time="2026-03-04T01:04:35.555318919Z" level=info msg="TearDown network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\" successfully" Mar 4 01:04:35.555554 containerd[1494]: time="2026-03-04T01:04:35.555356079Z" level=info msg="StopPodSandbox for \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\" returns successfully" Mar 4 01:04:35.565905 containerd[1494]: time="2026-03-04T01:04:35.565274301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-92hmt,Uid:be51fa33-3cc1-4ec8-b66b-77876e60bb47,Namespace:calico-system,Attempt:1,}" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.253 [INFO][3932] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.253 [INFO][3932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" iface="eth0" netns="/var/run/netns/cni-5877277e-6ef3-b281-d300-703995aa1335" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.254 [INFO][3932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" iface="eth0" netns="/var/run/netns/cni-5877277e-6ef3-b281-d300-703995aa1335" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.256 [INFO][3932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" iface="eth0" netns="/var/run/netns/cni-5877277e-6ef3-b281-d300-703995aa1335" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.256 [INFO][3932] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.257 [INFO][3932] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.420 [INFO][3986] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.420 [INFO][3986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.521 [INFO][3986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.539 [WARNING][3986] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.539 [INFO][3986] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.549 [INFO][3986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:35.575434 containerd[1494]: 2026-03-04 01:04:35.571 [INFO][3932] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:04:35.576311 containerd[1494]: time="2026-03-04T01:04:35.575839643Z" level=info msg="TearDown network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\" successfully" Mar 4 01:04:35.576311 containerd[1494]: time="2026-03-04T01:04:35.575878244Z" level=info msg="StopPodSandbox for \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\" returns successfully" Mar 4 01:04:35.583716 kubelet[2586]: E0304 01:04:35.583445 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:35.586439 containerd[1494]: time="2026-03-04T01:04:35.586398671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4r8bd,Uid:33342b7d-98dc-47af-8bcb-0cd875c1acc0,Namespace:kube-system,Attempt:1,}" Mar 4 01:04:35.599277 kubelet[2586]: I0304 01:04:35.598436 2586 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-nginx-config\" (UniqueName: \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-nginx-config\") pod \"c8861453-5d14-406c-80f9-534ae729a8fe\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " Mar 4 01:04:35.599277 kubelet[2586]: I0304 01:04:35.598489 2586 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-backend-key-pair\") pod \"c8861453-5d14-406c-80f9-534ae729a8fe\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " Mar 4 01:04:35.599277 kubelet[2586]: I0304 01:04:35.598508 2586 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-ca-bundle\") pod \"c8861453-5d14-406c-80f9-534ae729a8fe\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " Mar 4 01:04:35.599277 kubelet[2586]: I0304 01:04:35.598527 2586 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c8861453-5d14-406c-80f9-534ae729a8fe-kube-api-access-4z8hs\" (UniqueName: \"kubernetes.io/projected/c8861453-5d14-406c-80f9-534ae729a8fe-kube-api-access-4z8hs\") pod \"c8861453-5d14-406c-80f9-534ae729a8fe\" (UID: \"c8861453-5d14-406c-80f9-534ae729a8fe\") " Mar 4 01:04:35.602685 kubelet[2586]: I0304 01:04:35.601870 2586 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-nginx-config" pod "c8861453-5d14-406c-80f9-534ae729a8fe" (UID: "c8861453-5d14-406c-80f9-534ae729a8fe"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:04:35.602685 kubelet[2586]: I0304 01:04:35.602387 2586 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-ca-bundle" pod "c8861453-5d14-406c-80f9-534ae729a8fe" (UID: "c8861453-5d14-406c-80f9-534ae729a8fe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:04:35.609482 kubelet[2586]: I0304 01:04:35.609443 2586 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-backend-key-pair" pod "c8861453-5d14-406c-80f9-534ae729a8fe" (UID: "c8861453-5d14-406c-80f9-534ae729a8fe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:04:35.613474 kubelet[2586]: I0304 01:04:35.613412 2586 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8861453-5d14-406c-80f9-534ae729a8fe-kube-api-access-4z8hs" pod "c8861453-5d14-406c-80f9-534ae729a8fe" (UID: "c8861453-5d14-406c-80f9-534ae729a8fe"). InnerVolumeSpecName "kube-api-access-4z8hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:04:35.700206 kubelet[2586]: I0304 01:04:35.699908 2586 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 4 01:04:35.701020 kubelet[2586]: I0304 01:04:35.700874 2586 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 4 01:04:35.701020 kubelet[2586]: I0304 01:04:35.700914 2586 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8861453-5d14-406c-80f9-534ae729a8fe-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 4 01:04:35.701020 kubelet[2586]: I0304 01:04:35.700932 2586 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4z8hs\" (UniqueName: \"kubernetes.io/projected/c8861453-5d14-406c-80f9-534ae729a8fe-kube-api-access-4z8hs\") on node \"localhost\" DevicePath \"\"" Mar 4 01:04:35.971208 systemd-networkd[1407]: cali4b789a1bc6a: Link UP Mar 4 01:04:35.972689 systemd-networkd[1407]: cali4b789a1bc6a: Gained carrier Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.616 [ERROR][4030] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.707 [INFO][4030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b8wzc-eth0 csi-node-driver- calico-system 70e9d987-0384-4b7c-aa94-bbc127680682 1007 0 2026-03-04 01:03:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b8wzc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b789a1bc6a [] [] }} ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.708 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.850 [INFO][4087] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" HandleID="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.864 [INFO][4087] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" HandleID="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b8wzc", "timestamp":"2026-03-04 01:04:35.85015697 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000594160)} Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.865 [INFO][4087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.865 [INFO][4087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.865 [INFO][4087] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.873 [INFO][4087] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.889 [INFO][4087] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.917 [INFO][4087] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.921 [INFO][4087] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.927 [INFO][4087] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.927 [INFO][4087] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.931 [INFO][4087] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553 Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.938 [INFO][4087] ipam/ipam.go 1272: Writing 
block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.951 [INFO][4087] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.951 [INFO][4087] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" host="localhost" Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.951 [INFO][4087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:36.015744 containerd[1494]: 2026-03-04 01:04:35.951 [INFO][4087] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" HandleID="k8s-pod-network.38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.017112 containerd[1494]: 2026-03-04 01:04:35.956 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b8wzc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70e9d987-0384-4b7c-aa94-bbc127680682", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b8wzc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b789a1bc6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.017112 containerd[1494]: 2026-03-04 01:04:35.956 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.017112 containerd[1494]: 2026-03-04 01:04:35.957 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b789a1bc6a ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.017112 containerd[1494]: 2026-03-04 01:04:35.972 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" 
Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.017112 containerd[1494]: 2026-03-04 01:04:35.975 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b8wzc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70e9d987-0384-4b7c-aa94-bbc127680682", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553", Pod:"csi-node-driver-b8wzc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b789a1bc6a", MAC:"7a:35:6b:b4:00:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.017112 containerd[1494]: 2026-03-04 01:04:36.010 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553" Namespace="calico-system" Pod="csi-node-driver-b8wzc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:04:36.062456 systemd[1]: run-netns-cni\x2d49c79101\x2d6523\x2da466\x2db01f\x2d90dae200e325.mount: Deactivated successfully. Mar 4 01:04:36.063208 systemd[1]: run-netns-cni\x2d5877277e\x2d6ef3\x2db281\x2dd300\x2d703995aa1335.mount: Deactivated successfully. Mar 4 01:04:36.063320 systemd[1]: var-lib-kubelet-pods-c8861453\x2d5d14\x2d406c\x2d80f9\x2d534ae729a8fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4z8hs.mount: Deactivated successfully. Mar 4 01:04:36.063428 systemd[1]: var-lib-kubelet-pods-c8861453\x2d5d14\x2d406c\x2d80f9\x2d534ae729a8fe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 01:04:36.078710 containerd[1494]: time="2026-03-04T01:04:36.076814737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:36.078710 containerd[1494]: time="2026-03-04T01:04:36.077177356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:36.078710 containerd[1494]: time="2026-03-04T01:04:36.077197834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.078710 containerd[1494]: time="2026-03-04T01:04:36.077312399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.133805 systemd[1]: Removed slice kubepods-besteffort-podc8861453_5d14_406c_80f9_534ae729a8fe.slice - libcontainer container kubepods-besteffort-podc8861453_5d14_406c_80f9_534ae729a8fe.slice. Mar 4 01:04:36.194884 systemd[1]: Started cri-containerd-38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553.scope - libcontainer container 38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553. Mar 4 01:04:36.221069 systemd[1]: run-containerd-runc-k8s.io-647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc-runc.Pvn3fP.mount: Deactivated successfully. Mar 4 01:04:36.238880 systemd-networkd[1407]: caliebbf1b98ed9: Link UP Mar 4 01:04:36.240056 systemd-networkd[1407]: caliebbf1b98ed9: Gained carrier Mar 4 01:04:36.292816 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.677 [ERROR][4040] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.735 [INFO][4040] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0 calico-apiserver-7d448765fc- calico-system 72e16b4b-2414-436d-9047-7c572057b1a0 1010 0 2026-03-04 01:03:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d448765fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d448765fc-8j94r eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliebbf1b98ed9 [] [] }} 
ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.735 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.852 [INFO][4098] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" HandleID="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.866 [INFO][4098] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" HandleID="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138fa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7d448765fc-8j94r", "timestamp":"2026-03-04 01:04:35.852072404 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042d600)} Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.867 [INFO][4098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.951 [INFO][4098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.952 [INFO][4098] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:35.981 [INFO][4098] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.010 [INFO][4098] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.028 [INFO][4098] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.032 [INFO][4098] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.038 [INFO][4098] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.038 [INFO][4098] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.044 [INFO][4098] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651 Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.060 [INFO][4098] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.098 [INFO][4098] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.103 [INFO][4098] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" host="localhost" Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.128 [INFO][4098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:36.308475 containerd[1494]: 2026-03-04 01:04:36.131 [INFO][4098] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" HandleID="k8s-pod-network.28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.309484 containerd[1494]: 2026-03-04 01:04:36.172 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"72e16b4b-2414-436d-9047-7c572057b1a0", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d448765fc-8j94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliebbf1b98ed9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.309484 containerd[1494]: 2026-03-04 01:04:36.172 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.309484 containerd[1494]: 2026-03-04 01:04:36.172 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebbf1b98ed9 ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.309484 containerd[1494]: 2026-03-04 01:04:36.240 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.309484 containerd[1494]: 2026-03-04 01:04:36.245 [INFO][4040] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"72e16b4b-2414-436d-9047-7c572057b1a0", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651", Pod:"calico-apiserver-7d448765fc-8j94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliebbf1b98ed9", MAC:"c6:c4:e9:d7:26:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.309484 containerd[1494]: 2026-03-04 01:04:36.270 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-8j94r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0" Mar 4 01:04:36.327674 systemd[1]: Created slice kubepods-besteffort-pod34834c4f_275a_421b_bfd8_88b14c50b403.slice - libcontainer container kubepods-besteffort-pod34834c4f_275a_421b_bfd8_88b14c50b403.slice. Mar 4 01:04:36.375835 systemd-networkd[1407]: cali7b90fa9a69e: Link UP Mar 4 01:04:36.377328 systemd-networkd[1407]: cali7b90fa9a69e: Gained carrier Mar 4 01:04:36.401853 containerd[1494]: time="2026-03-04T01:04:36.401739522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b8wzc,Uid:70e9d987-0384-4b7c-aa94-bbc127680682,Namespace:calico-system,Attempt:1,} returns sandbox id \"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553\"" Mar 4 01:04:36.413804 containerd[1494]: time="2026-03-04T01:04:36.411147826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 01:04:36.414235 kubelet[2586]: I0304 01:04:36.414085 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/34834c4f-275a-421b-bfd8-88b14c50b403-nginx-config\") pod \"whisker-758dfd4f4b-72nsg\" (UID: \"34834c4f-275a-421b-bfd8-88b14c50b403\") " pod="calico-system/whisker-758dfd4f4b-72nsg" Mar 4 01:04:36.414235 kubelet[2586]: I0304 01:04:36.414196 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34834c4f-275a-421b-bfd8-88b14c50b403-whisker-backend-key-pair\") pod \"whisker-758dfd4f4b-72nsg\" (UID: \"34834c4f-275a-421b-bfd8-88b14c50b403\") " pod="calico-system/whisker-758dfd4f4b-72nsg" Mar 4 01:04:36.414235 kubelet[2586]: I0304 01:04:36.414229 2586 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tk58\" (UniqueName: \"kubernetes.io/projected/34834c4f-275a-421b-bfd8-88b14c50b403-kube-api-access-2tk58\") pod \"whisker-758dfd4f4b-72nsg\" (UID: \"34834c4f-275a-421b-bfd8-88b14c50b403\") " pod="calico-system/whisker-758dfd4f4b-72nsg" Mar 4 01:04:36.417186 kubelet[2586]: I0304 01:04:36.414256 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34834c4f-275a-421b-bfd8-88b14c50b403-whisker-ca-bundle\") pod \"whisker-758dfd4f4b-72nsg\" (UID: \"34834c4f-275a-421b-bfd8-88b14c50b403\") " pod="calico-system/whisker-758dfd4f4b-72nsg" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:35.716 [ERROR][4067] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:35.737 [INFO][4067] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--4r8bd-eth0 coredns-7d764666f9- kube-system 33342b7d-98dc-47af-8bcb-0cd875c1acc0 1006 0 2026-03-04 01:03:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-4r8bd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b90fa9a69e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 
01:04:35.737 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:35.874 [INFO][4096] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" HandleID="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:35.903 [INFO][4096] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" HandleID="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fcde0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-4r8bd", "timestamp":"2026-03-04 01:04:35.874027496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001982c0)} Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:35.903 [INFO][4096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.125 [INFO][4096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.125 [INFO][4096] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.158 [INFO][4096] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.225 [INFO][4096] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.255 [INFO][4096] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.271 [INFO][4096] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.277 [INFO][4096] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.277 [INFO][4096] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.281 [INFO][4096] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.301 [INFO][4096] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.322 [INFO][4096] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.323 [INFO][4096] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" host="localhost" Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.324 [INFO][4096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:36.444788 containerd[1494]: 2026-03-04 01:04:36.324 [INFO][4096] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" HandleID="k8s-pod-network.7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:36.446173 containerd[1494]: 2026-03-04 01:04:36.343 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4r8bd-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"33342b7d-98dc-47af-8bcb-0cd875c1acc0", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-4r8bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b90fa9a69e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.446173 containerd[1494]: 2026-03-04 01:04:36.347 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:36.446173 containerd[1494]: 2026-03-04 01:04:36.364 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b90fa9a69e ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 
01:04:36.446173 containerd[1494]: 2026-03-04 01:04:36.378 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:36.446173 containerd[1494]: 2026-03-04 01:04:36.379 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4r8bd-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"33342b7d-98dc-47af-8bcb-0cd875c1acc0", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f", Pod:"coredns-7d764666f9-4r8bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b90fa9a69e", MAC:"4a:f6:4f:3d:f2:5d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.446173 containerd[1494]: 2026-03-04 01:04:36.418 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f" Namespace="kube-system" Pod="coredns-7d764666f9-4r8bd" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:04:36.525787 containerd[1494]: time="2026-03-04T01:04:36.505551225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:36.525787 containerd[1494]: time="2026-03-04T01:04:36.521705551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:36.525787 containerd[1494]: time="2026-03-04T01:04:36.522549430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.529753 containerd[1494]: time="2026-03-04T01:04:36.528030432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.567889 kubelet[2586]: I0304 01:04:36.567710 2586 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c8861453-5d14-406c-80f9-534ae729a8fe" path="/var/lib/kubelet/pods/c8861453-5d14-406c-80f9-534ae729a8fe/volumes" Mar 4 01:04:36.585890 systemd-networkd[1407]: calib5c16ead2af: Link UP Mar 4 01:04:36.612111 systemd-networkd[1407]: calib5c16ead2af: Gained carrier Mar 4 01:04:36.639289 containerd[1494]: time="2026-03-04T01:04:36.637488057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:36.639289 containerd[1494]: time="2026-03-04T01:04:36.637684384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:36.639289 containerd[1494]: time="2026-03-04T01:04:36.637705924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.639289 containerd[1494]: time="2026-03-04T01:04:36.637874370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.656244 systemd[1]: Started cri-containerd-28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651.scope - libcontainer container 28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651. 
Mar 4 01:04:36.659430 containerd[1494]: time="2026-03-04T01:04:36.659353131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-758dfd4f4b-72nsg,Uid:34834c4f-275a-421b-bfd8-88b14c50b403,Namespace:calico-system,Attempt:0,}" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:35.701 [ERROR][4058] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:35.738 [INFO][4058] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0 calico-apiserver-7d448765fc- calico-system be51fa33-3cc1-4ec8-b66b-77876e60bb47 1009 0 2026-03-04 01:03:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d448765fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d448765fc-92hmt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib5c16ead2af [] [] }} ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:35.738 [INFO][4058] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:35.873 [INFO][4095] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" HandleID="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:35.917 [INFO][4095] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" HandleID="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000586370), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7d448765fc-92hmt", "timestamp":"2026-03-04 01:04:35.873411681 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000530f20)} Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:35.917 [INFO][4095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.324 [INFO][4095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.325 [INFO][4095] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.340 [INFO][4095] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.370 [INFO][4095] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.409 [INFO][4095] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.437 [INFO][4095] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.459 [INFO][4095] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.459 [INFO][4095] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.472 [INFO][4095] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4 Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.496 [INFO][4095] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.508 [INFO][4095] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.508 [INFO][4095] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" host="localhost" Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.508 [INFO][4095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:36.675536 containerd[1494]: 2026-03-04 01:04:36.508 [INFO][4095] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" HandleID="k8s-pod-network.c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.677189 containerd[1494]: 2026-03-04 01:04:36.562 [INFO][4058] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"be51fa33-3cc1-4ec8-b66b-77876e60bb47", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d448765fc-92hmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib5c16ead2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.677189 containerd[1494]: 2026-03-04 01:04:36.562 [INFO][4058] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.677189 containerd[1494]: 2026-03-04 01:04:36.562 [INFO][4058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5c16ead2af ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.677189 containerd[1494]: 2026-03-04 01:04:36.616 [INFO][4058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.677189 containerd[1494]: 2026-03-04 01:04:36.621 [INFO][4058] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"be51fa33-3cc1-4ec8-b66b-77876e60bb47", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4", Pod:"calico-apiserver-7d448765fc-92hmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib5c16ead2af", MAC:"72:05:f3:de:08:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:36.677189 containerd[1494]: 2026-03-04 01:04:36.668 [INFO][4058] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4" 
Namespace="calico-system" Pod="calico-apiserver-7d448765fc-92hmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:04:36.717854 systemd[1]: Started cri-containerd-7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f.scope - libcontainer container 7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f. Mar 4 01:04:36.767046 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:36.796751 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:36.813520 containerd[1494]: time="2026-03-04T01:04:36.813452614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:36.814237 containerd[1494]: time="2026-03-04T01:04:36.814117117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:36.814708 containerd[1494]: time="2026-03-04T01:04:36.814376344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.815112 containerd[1494]: time="2026-03-04T01:04:36.814918078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:36.877119 systemd[1]: Started cri-containerd-c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4.scope - libcontainer container c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4. 
Mar 4 01:04:36.909035 containerd[1494]: time="2026-03-04T01:04:36.908932512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4r8bd,Uid:33342b7d-98dc-47af-8bcb-0cd875c1acc0,Namespace:kube-system,Attempt:1,} returns sandbox id \"7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f\"" Mar 4 01:04:36.912063 kubelet[2586]: E0304 01:04:36.911229 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:36.926032 containerd[1494]: time="2026-03-04T01:04:36.925177137Z" level=info msg="CreateContainer within sandbox \"7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:04:37.000198 containerd[1494]: time="2026-03-04T01:04:36.998183046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d448765fc-8j94r,Uid:72e16b4b-2414-436d-9047-7c572057b1a0,Namespace:calico-system,Attempt:1,} returns sandbox id \"28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651\"" Mar 4 01:04:37.003721 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:37.035027 containerd[1494]: time="2026-03-04T01:04:37.034882067Z" level=info msg="CreateContainer within sandbox \"7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a32df6190af60d272763870692e4b48a1ae3dec8b745833ff6834ae1a57e8f6\"" Mar 4 01:04:37.038698 containerd[1494]: time="2026-03-04T01:04:37.037708013Z" level=info msg="StartContainer for \"8a32df6190af60d272763870692e4b48a1ae3dec8b745833ff6834ae1a57e8f6\"" Mar 4 01:04:37.182245 containerd[1494]: time="2026-03-04T01:04:37.182098876Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7d448765fc-92hmt,Uid:be51fa33-3cc1-4ec8-b66b-77876e60bb47,Namespace:calico-system,Attempt:1,} returns sandbox id \"c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4\"" Mar 4 01:04:37.256177 systemd[1]: Started cri-containerd-8a32df6190af60d272763870692e4b48a1ae3dec8b745833ff6834ae1a57e8f6.scope - libcontainer container 8a32df6190af60d272763870692e4b48a1ae3dec8b745833ff6834ae1a57e8f6. Mar 4 01:04:37.379546 systemd[1]: run-containerd-runc-k8s.io-647f68b1dc00e59a5974441724d63f5c87d6b2776f71a78885ed397ba3fe45bc-runc.1DxKDg.mount: Deactivated successfully. Mar 4 01:04:37.407839 kernel: calico-node[4326]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 01:04:37.416022 containerd[1494]: time="2026-03-04T01:04:37.415099132Z" level=info msg="StartContainer for \"8a32df6190af60d272763870692e4b48a1ae3dec8b745833ff6834ae1a57e8f6\" returns successfully" Mar 4 01:04:37.424922 systemd-networkd[1407]: cali345e968cad4: Link UP Mar 4 01:04:37.430232 systemd-networkd[1407]: cali345e968cad4: Gained carrier Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:36.946 [ERROR][4389] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.026 [INFO][4389] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--758dfd4f4b--72nsg-eth0 whisker-758dfd4f4b- calico-system 34834c4f-275a-421b-bfd8-88b14c50b403 1046 0 2026-03-04 01:04:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:758dfd4f4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-758dfd4f4b-72nsg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] 
cali345e968cad4 [] [] }} ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.027 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.204 [INFO][4464] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" HandleID="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Workload="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.224 [INFO][4464] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" HandleID="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Workload="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022d7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-758dfd4f4b-72nsg", "timestamp":"2026-03-04 01:04:37.204146006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000b3a20)} Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.224 [INFO][4464] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.224 [INFO][4464] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.224 [INFO][4464] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.237 [INFO][4464] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.267 [INFO][4464] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.316 [INFO][4464] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.321 [INFO][4464] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.328 [INFO][4464] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.328 [INFO][4464] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.333 [INFO][4464] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64 Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.349 [INFO][4464] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.384 [INFO][4464] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.384 [INFO][4464] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" host="localhost" Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.384 [INFO][4464] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:37.541510 containerd[1494]: 2026-03-04 01:04:37.384 [INFO][4464] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" HandleID="k8s-pod-network.820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Workload="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.543468 containerd[1494]: 2026-03-04 01:04:37.398 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--758dfd4f4b--72nsg-eth0", GenerateName:"whisker-758dfd4f4b-", Namespace:"calico-system", SelfLink:"", UID:"34834c4f-275a-421b-bfd8-88b14c50b403", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"758dfd4f4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-758dfd4f4b-72nsg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali345e968cad4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:37.543468 containerd[1494]: 2026-03-04 01:04:37.398 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.543468 containerd[1494]: 2026-03-04 01:04:37.398 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali345e968cad4 ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.543468 containerd[1494]: 2026-03-04 01:04:37.443 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.543468 containerd[1494]: 2026-03-04 01:04:37.444 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" 
Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--758dfd4f4b--72nsg-eth0", GenerateName:"whisker-758dfd4f4b-", Namespace:"calico-system", SelfLink:"", UID:"34834c4f-275a-421b-bfd8-88b14c50b403", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"758dfd4f4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64", Pod:"whisker-758dfd4f4b-72nsg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali345e968cad4", MAC:"82:6b:40:7c:b6:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:37.543468 containerd[1494]: 2026-03-04 01:04:37.507 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64" Namespace="calico-system" Pod="whisker-758dfd4f4b-72nsg" WorkloadEndpoint="localhost-k8s-whisker--758dfd4f4b--72nsg-eth0" Mar 4 01:04:37.673556 containerd[1494]: 
time="2026-03-04T01:04:37.671841200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:37.673556 containerd[1494]: time="2026-03-04T01:04:37.671935547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:37.673556 containerd[1494]: time="2026-03-04T01:04:37.671950295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:37.673556 containerd[1494]: time="2026-03-04T01:04:37.672172891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:37.768940 systemd[1]: Started cri-containerd-820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64.scope - libcontainer container 820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64. 
Mar 4 01:04:37.840137 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:37.880926 systemd-networkd[1407]: cali4b789a1bc6a: Gained IPv6LL Mar 4 01:04:37.903922 containerd[1494]: time="2026-03-04T01:04:37.903817178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-758dfd4f4b-72nsg,Uid:34834c4f-275a-421b-bfd8-88b14c50b403,Namespace:calico-system,Attempt:0,} returns sandbox id \"820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64\"" Mar 4 01:04:38.009369 containerd[1494]: time="2026-03-04T01:04:38.009257680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:38.011640 containerd[1494]: time="2026-03-04T01:04:38.011388919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 4 01:04:38.013078 containerd[1494]: time="2026-03-04T01:04:38.013038898Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:38.017773 containerd[1494]: time="2026-03-04T01:04:38.017650225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:38.020643 containerd[1494]: time="2026-03-04T01:04:38.018328916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.607140493s" Mar 4 01:04:38.020643 containerd[1494]: time="2026-03-04T01:04:38.018369211Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 4 01:04:38.022347 containerd[1494]: time="2026-03-04T01:04:38.022270415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:04:38.031790 containerd[1494]: time="2026-03-04T01:04:38.031720356Z" level=info msg="CreateContainer within sandbox \"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 01:04:38.069243 containerd[1494]: time="2026-03-04T01:04:38.064962217Z" level=info msg="CreateContainer within sandbox \"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a1644f2134719456839b2d06de9d4f97ce15f34a1abc9b8c0a568039ea0fba27\"" Mar 4 01:04:38.069243 containerd[1494]: time="2026-03-04T01:04:38.067712606Z" level=info msg="StartContainer for \"a1644f2134719456839b2d06de9d4f97ce15f34a1abc9b8c0a568039ea0fba27\"" Mar 4 01:04:38.074508 systemd-networkd[1407]: caliebbf1b98ed9: Gained IPv6LL Mar 4 01:04:38.153824 systemd[1]: Started cri-containerd-a1644f2134719456839b2d06de9d4f97ce15f34a1abc9b8c0a568039ea0fba27.scope - libcontainer container a1644f2134719456839b2d06de9d4f97ce15f34a1abc9b8c0a568039ea0fba27. 
Mar 4 01:04:38.158406 kubelet[2586]: E0304 01:04:38.157426 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:38.234645 kubelet[2586]: I0304 01:04:38.233442 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-4r8bd" podStartSLOduration=63.233419683 podStartE2EDuration="1m3.233419683s" podCreationTimestamp="2026-03-04 01:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:04:38.199689958 +0000 UTC m=+68.512782590" watchObservedRunningTime="2026-03-04 01:04:38.233419683 +0000 UTC m=+68.546512304" Mar 4 01:04:38.262757 containerd[1494]: time="2026-03-04T01:04:38.262505506Z" level=info msg="StartContainer for \"a1644f2134719456839b2d06de9d4f97ce15f34a1abc9b8c0a568039ea0fba27\" returns successfully" Mar 4 01:04:38.395830 systemd-networkd[1407]: cali7b90fa9a69e: Gained IPv6LL Mar 4 01:04:38.521901 systemd-networkd[1407]: calib5c16ead2af: Gained IPv6LL Mar 4 01:04:38.547534 kubelet[2586]: E0304 01:04:38.546451 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:38.567025 systemd-networkd[1407]: vxlan.calico: Link UP Mar 4 01:04:38.567038 systemd-networkd[1407]: vxlan.calico: Gained carrier Mar 4 01:04:39.169860 kubelet[2586]: E0304 01:04:39.169790 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:39.290783 systemd-networkd[1407]: cali345e968cad4: Gained IPv6LL Mar 4 01:04:39.617686 containerd[1494]: time="2026-03-04T01:04:39.617516938Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:39.618916 containerd[1494]: time="2026-03-04T01:04:39.618858442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 4 01:04:39.620319 containerd[1494]: time="2026-03-04T01:04:39.620261045Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:39.627296 containerd[1494]: time="2026-03-04T01:04:39.627232576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:39.628067 containerd[1494]: time="2026-03-04T01:04:39.627931797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.605593044s" Mar 4 01:04:39.628067 containerd[1494]: time="2026-03-04T01:04:39.628022697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:04:39.629643 containerd[1494]: time="2026-03-04T01:04:39.629545891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:04:39.635211 containerd[1494]: time="2026-03-04T01:04:39.635164285Z" level=info msg="CreateContainer within sandbox \"28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:04:39.653422 
containerd[1494]: time="2026-03-04T01:04:39.653358384Z" level=info msg="CreateContainer within sandbox \"28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a57ceaf31dbfcf70a2617c9e6d360dbaf77dda98b788b37bba79bcfcac1c3e7b\"" Mar 4 01:04:39.654615 containerd[1494]: time="2026-03-04T01:04:39.654148776Z" level=info msg="StartContainer for \"a57ceaf31dbfcf70a2617c9e6d360dbaf77dda98b788b37bba79bcfcac1c3e7b\"" Mar 4 01:04:39.692277 systemd[1]: run-containerd-runc-k8s.io-a57ceaf31dbfcf70a2617c9e6d360dbaf77dda98b788b37bba79bcfcac1c3e7b-runc.cObq5P.mount: Deactivated successfully. Mar 4 01:04:39.701785 systemd[1]: Started cri-containerd-a57ceaf31dbfcf70a2617c9e6d360dbaf77dda98b788b37bba79bcfcac1c3e7b.scope - libcontainer container a57ceaf31dbfcf70a2617c9e6d360dbaf77dda98b788b37bba79bcfcac1c3e7b. Mar 4 01:04:39.732281 containerd[1494]: time="2026-03-04T01:04:39.730928849Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:39.732659 containerd[1494]: time="2026-03-04T01:04:39.732546948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 01:04:39.736123 containerd[1494]: time="2026-03-04T01:04:39.735878485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 106.222016ms" Mar 4 01:04:39.736123 containerd[1494]: time="2026-03-04T01:04:39.735918640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:04:39.740087 containerd[1494]: time="2026-03-04T01:04:39.739411901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 01:04:39.752160 containerd[1494]: time="2026-03-04T01:04:39.752102067Z" level=info msg="CreateContainer within sandbox \"c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:04:39.784042 containerd[1494]: time="2026-03-04T01:04:39.783899788Z" level=info msg="StartContainer for \"a57ceaf31dbfcf70a2617c9e6d360dbaf77dda98b788b37bba79bcfcac1c3e7b\" returns successfully" Mar 4 01:04:39.785110 containerd[1494]: time="2026-03-04T01:04:39.784902374Z" level=info msg="CreateContainer within sandbox \"c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4fb6cfd9fdc32c27c7049105aeae85fdd7bab26b3283c3ec66ca986681d3ecc\"" Mar 4 01:04:39.787258 containerd[1494]: time="2026-03-04T01:04:39.787067269Z" level=info msg="StartContainer for \"f4fb6cfd9fdc32c27c7049105aeae85fdd7bab26b3283c3ec66ca986681d3ecc\"" Mar 4 01:04:39.833946 systemd[1]: Started cri-containerd-f4fb6cfd9fdc32c27c7049105aeae85fdd7bab26b3283c3ec66ca986681d3ecc.scope - libcontainer container f4fb6cfd9fdc32c27c7049105aeae85fdd7bab26b3283c3ec66ca986681d3ecc. 
Mar 4 01:04:39.896705 containerd[1494]: time="2026-03-04T01:04:39.895247000Z" level=info msg="StartContainer for \"f4fb6cfd9fdc32c27c7049105aeae85fdd7bab26b3283c3ec66ca986681d3ecc\" returns successfully" Mar 4 01:04:40.206456 kubelet[2586]: E0304 01:04:40.203441 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:40.217962 kubelet[2586]: I0304 01:04:40.217215 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d448765fc-8j94r" podStartSLOduration=41.599778083 podStartE2EDuration="44.217203583s" podCreationTimestamp="2026-03-04 01:03:56 +0000 UTC" firstStartedPulling="2026-03-04 01:04:37.011653713 +0000 UTC m=+67.324746324" lastFinishedPulling="2026-03-04 01:04:39.629079223 +0000 UTC m=+69.942171824" observedRunningTime="2026-03-04 01:04:40.216693148 +0000 UTC m=+70.529785749" watchObservedRunningTime="2026-03-04 01:04:40.217203583 +0000 UTC m=+70.530296184" Mar 4 01:04:40.448738 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Mar 4 01:04:40.464723 containerd[1494]: time="2026-03-04T01:04:40.463270643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:40.466652 containerd[1494]: time="2026-03-04T01:04:40.466258737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 4 01:04:40.468657 containerd[1494]: time="2026-03-04T01:04:40.467855857Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:40.475483 containerd[1494]: time="2026-03-04T01:04:40.475380923Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:40.476288 containerd[1494]: time="2026-03-04T01:04:40.476217069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 736.767717ms" Mar 4 01:04:40.476288 containerd[1494]: time="2026-03-04T01:04:40.476275237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 4 01:04:40.484113 containerd[1494]: time="2026-03-04T01:04:40.480838199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 01:04:40.489843 containerd[1494]: time="2026-03-04T01:04:40.489810312Z" level=info msg="CreateContainer within sandbox \"820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 01:04:40.514481 containerd[1494]: time="2026-03-04T01:04:40.514383432Z" level=info msg="CreateContainer within sandbox \"820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0546ef4ac3d0b2c3f45ee772c7148a4f5bb79cb2dd0cbd7ef5177125ad4c18e4\"" Mar 4 01:04:40.516243 containerd[1494]: time="2026-03-04T01:04:40.516171840Z" level=info msg="StartContainer for \"0546ef4ac3d0b2c3f45ee772c7148a4f5bb79cb2dd0cbd7ef5177125ad4c18e4\"" Mar 4 01:04:40.559775 systemd[1]: Started cri-containerd-0546ef4ac3d0b2c3f45ee772c7148a4f5bb79cb2dd0cbd7ef5177125ad4c18e4.scope - libcontainer container 
0546ef4ac3d0b2c3f45ee772c7148a4f5bb79cb2dd0cbd7ef5177125ad4c18e4. Mar 4 01:04:40.643105 containerd[1494]: time="2026-03-04T01:04:40.642812702Z" level=info msg="StartContainer for \"0546ef4ac3d0b2c3f45ee772c7148a4f5bb79cb2dd0cbd7ef5177125ad4c18e4\" returns successfully" Mar 4 01:04:41.267555 containerd[1494]: time="2026-03-04T01:04:41.267404526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:41.268692 containerd[1494]: time="2026-03-04T01:04:41.268620157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 4 01:04:41.270239 containerd[1494]: time="2026-03-04T01:04:41.270051070Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:41.273822 containerd[1494]: time="2026-03-04T01:04:41.273755121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:41.275091 containerd[1494]: time="2026-03-04T01:04:41.274845154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 793.963413ms" Mar 4 01:04:41.275091 containerd[1494]: time="2026-03-04T01:04:41.275033948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 4 01:04:41.276468 containerd[1494]: time="2026-03-04T01:04:41.276374585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 01:04:41.301189 containerd[1494]: time="2026-03-04T01:04:41.301076448Z" level=info msg="CreateContainer within sandbox \"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 01:04:41.324228 containerd[1494]: time="2026-03-04T01:04:41.324150018Z" level=info msg="CreateContainer within sandbox \"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fe830dd900e75fe2aa64e31b15b9841f26ffcc8464dfc734e0adc483fb623ce4\"" Mar 4 01:04:41.325147 containerd[1494]: time="2026-03-04T01:04:41.325104658Z" level=info msg="StartContainer for \"fe830dd900e75fe2aa64e31b15b9841f26ffcc8464dfc734e0adc483fb623ce4\"" Mar 4 01:04:41.383135 systemd[1]: Started cri-containerd-fe830dd900e75fe2aa64e31b15b9841f26ffcc8464dfc734e0adc483fb623ce4.scope - libcontainer container fe830dd900e75fe2aa64e31b15b9841f26ffcc8464dfc734e0adc483fb623ce4. 
Mar 4 01:04:41.448197 containerd[1494]: time="2026-03-04T01:04:41.448148136Z" level=info msg="StartContainer for \"fe830dd900e75fe2aa64e31b15b9841f26ffcc8464dfc734e0adc483fb623ce4\" returns successfully" Mar 4 01:04:41.543121 kubelet[2586]: E0304 01:04:41.542972 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:41.773511 kubelet[2586]: I0304 01:04:41.773428 2586 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 01:04:41.773771 kubelet[2586]: I0304 01:04:41.773537 2586 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 01:04:42.226856 kubelet[2586]: I0304 01:04:42.226651 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d448765fc-92hmt" podStartSLOduration=43.695217987 podStartE2EDuration="46.226629295s" podCreationTimestamp="2026-03-04 01:03:56 +0000 UTC" firstStartedPulling="2026-03-04 01:04:37.205837224 +0000 UTC m=+67.518929825" lastFinishedPulling="2026-03-04 01:04:39.737248521 +0000 UTC m=+70.050341133" observedRunningTime="2026-03-04 01:04:40.242674225 +0000 UTC m=+70.555766866" watchObservedRunningTime="2026-03-04 01:04:42.226629295 +0000 UTC m=+72.539721896" Mar 4 01:04:42.257390 kubelet[2586]: I0304 01:04:42.256096 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-b8wzc" podStartSLOduration=39.388007202 podStartE2EDuration="44.256077056s" podCreationTimestamp="2026-03-04 01:03:58 +0000 UTC" firstStartedPulling="2026-03-04 01:04:36.408155414 +0000 UTC m=+66.721248015" lastFinishedPulling="2026-03-04 01:04:41.276225267 +0000 UTC m=+71.589317869" observedRunningTime="2026-03-04 
01:04:42.255787663 +0000 UTC m=+72.568880264" watchObservedRunningTime="2026-03-04 01:04:42.256077056 +0000 UTC m=+72.569169657" Mar 4 01:04:42.529919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986337564.mount: Deactivated successfully. Mar 4 01:04:42.577866 containerd[1494]: time="2026-03-04T01:04:42.576147054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:42.577866 containerd[1494]: time="2026-03-04T01:04:42.577806280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 4 01:04:42.580712 containerd[1494]: time="2026-03-04T01:04:42.580095254Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:42.586844 containerd[1494]: time="2026-03-04T01:04:42.586808841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:42.589423 containerd[1494]: time="2026-03-04T01:04:42.588769470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.312356173s" Mar 4 01:04:42.589423 containerd[1494]: time="2026-03-04T01:04:42.588834120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 4 01:04:42.621363 containerd[1494]: 
time="2026-03-04T01:04:42.621289736Z" level=info msg="CreateContainer within sandbox \"820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 01:04:42.656303 containerd[1494]: time="2026-03-04T01:04:42.656204069Z" level=info msg="CreateContainer within sandbox \"820f42eb34c6365d19fd28852a637472bacbd49bc331824ff7065022232a0d64\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"508d20b9f4f9a21c6496f64efa1846e1849613fc59dd5d9b41945ceb95474617\"" Mar 4 01:04:42.658882 containerd[1494]: time="2026-03-04T01:04:42.658704312Z" level=info msg="StartContainer for \"508d20b9f4f9a21c6496f64efa1846e1849613fc59dd5d9b41945ceb95474617\"" Mar 4 01:04:42.730082 systemd[1]: Started cri-containerd-508d20b9f4f9a21c6496f64efa1846e1849613fc59dd5d9b41945ceb95474617.scope - libcontainer container 508d20b9f4f9a21c6496f64efa1846e1849613fc59dd5d9b41945ceb95474617. Mar 4 01:04:42.785110 containerd[1494]: time="2026-03-04T01:04:42.784482744Z" level=info msg="StartContainer for \"508d20b9f4f9a21c6496f64efa1846e1849613fc59dd5d9b41945ceb95474617\" returns successfully" Mar 4 01:04:44.543986 containerd[1494]: time="2026-03-04T01:04:44.543906722Z" level=info msg="StopPodSandbox for \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\"" Mar 4 01:04:44.650950 kubelet[2586]: I0304 01:04:44.650183 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-758dfd4f4b-72nsg" podStartSLOduration=3.967795431 podStartE2EDuration="8.650161468s" podCreationTimestamp="2026-03-04 01:04:36 +0000 UTC" firstStartedPulling="2026-03-04 01:04:37.91082553 +0000 UTC m=+68.223918131" lastFinishedPulling="2026-03-04 01:04:42.593191567 +0000 UTC m=+72.906284168" observedRunningTime="2026-03-04 01:04:43.244972196 +0000 UTC m=+73.558064797" watchObservedRunningTime="2026-03-04 01:04:44.650161468 +0000 UTC m=+74.963254069" Mar 4 01:04:44.737839 
containerd[1494]: 2026-03-04 01:04:44.653 [INFO][5017] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.654 [INFO][5017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" iface="eth0" netns="/var/run/netns/cni-7ea17e78-6eef-99df-70f5-0585d13ff4e0" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.654 [INFO][5017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" iface="eth0" netns="/var/run/netns/cni-7ea17e78-6eef-99df-70f5-0585d13ff4e0" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.655 [INFO][5017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" iface="eth0" netns="/var/run/netns/cni-7ea17e78-6eef-99df-70f5-0585d13ff4e0" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.655 [INFO][5017] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.655 [INFO][5017] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.716 [INFO][5026] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.716 [INFO][5026] ipam/ipam_plugin.go 438: About 
to acquire host-wide IPAM lock. Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.716 [INFO][5026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.727 [WARNING][5026] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.727 [INFO][5026] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.730 [INFO][5026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:44.737839 containerd[1494]: 2026-03-04 01:04:44.734 [INFO][5017] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:04:44.738469 containerd[1494]: time="2026-03-04T01:04:44.738436015Z" level=info msg="TearDown network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" successfully" Mar 4 01:04:44.738500 containerd[1494]: time="2026-03-04T01:04:44.738475239Z" level=info msg="StopPodSandbox for \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" returns successfully" Mar 4 01:04:44.741665 systemd[1]: run-netns-cni\x2d7ea17e78\x2d6eef\x2d99df\x2d70f5\x2d0585d13ff4e0.mount: Deactivated successfully. 
Mar 4 01:04:44.749069 containerd[1494]: time="2026-03-04T01:04:44.748891145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8cffdcd-9vxkx,Uid:ac58af22-43eb-4c31-9dcb-bdf8a11f443d,Namespace:calico-system,Attempt:1,}" Mar 4 01:04:44.966458 systemd-networkd[1407]: cali637451f3f81: Link UP Mar 4 01:04:44.967814 systemd-networkd[1407]: cali637451f3f81: Gained carrier Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.836 [INFO][5035] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0 calico-kube-controllers-cd8cffdcd- calico-system ac58af22-43eb-4c31-9dcb-bdf8a11f443d 1143 0 2026-03-04 01:03:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd8cffdcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cd8cffdcd-9vxkx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali637451f3f81 [] [] }} ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.837 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.877 [INFO][5049] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" HandleID="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.891 [INFO][5049] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" HandleID="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cd8cffdcd-9vxkx", "timestamp":"2026-03-04 01:04:44.877416573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001982c0)} Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.891 [INFO][5049] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.891 [INFO][5049] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.892 [INFO][5049] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.909 [INFO][5049] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.917 [INFO][5049] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.924 [INFO][5049] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.927 [INFO][5049] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.932 [INFO][5049] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.932 [INFO][5049] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.935 [INFO][5049] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.942 [INFO][5049] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.956 [INFO][5049] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.958 [INFO][5049] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" host="localhost" Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.958 [INFO][5049] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:45.002628 containerd[1494]: 2026-03-04 01:04:44.958 [INFO][5049] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" HandleID="k8s-pod-network.f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.006427 containerd[1494]: 2026-03-04 01:04:44.962 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0", GenerateName:"calico-kube-controllers-cd8cffdcd-", Namespace:"calico-system", SelfLink:"", UID:"ac58af22-43eb-4c31-9dcb-bdf8a11f443d", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8cffdcd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cd8cffdcd-9vxkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali637451f3f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:45.006427 containerd[1494]: 2026-03-04 01:04:44.962 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.006427 containerd[1494]: 2026-03-04 01:04:44.962 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali637451f3f81 ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.006427 containerd[1494]: 2026-03-04 01:04:44.968 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.006427 containerd[1494]: 2026-03-04 
01:04:44.969 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0", GenerateName:"calico-kube-controllers-cd8cffdcd-", Namespace:"calico-system", SelfLink:"", UID:"ac58af22-43eb-4c31-9dcb-bdf8a11f443d", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8cffdcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f", Pod:"calico-kube-controllers-cd8cffdcd-9vxkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali637451f3f81", MAC:"d2:a2:ea:d4:4d:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:45.006427 containerd[1494]: 2026-03-04 
01:04:44.989 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f" Namespace="calico-system" Pod="calico-kube-controllers-cd8cffdcd-9vxkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:04:45.046649 containerd[1494]: time="2026-03-04T01:04:45.046151238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:45.046649 containerd[1494]: time="2026-03-04T01:04:45.046257787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:45.046649 containerd[1494]: time="2026-03-04T01:04:45.046282473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:45.046649 containerd[1494]: time="2026-03-04T01:04:45.046479101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:45.090493 systemd[1]: Started cri-containerd-f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f.scope - libcontainer container f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f. 
Mar 4 01:04:45.113414 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:45.153472 containerd[1494]: time="2026-03-04T01:04:45.153424730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8cffdcd-9vxkx,Uid:ac58af22-43eb-4c31-9dcb-bdf8a11f443d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f\"" Mar 4 01:04:45.156490 containerd[1494]: time="2026-03-04T01:04:45.156367077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 01:04:45.548116 containerd[1494]: time="2026-03-04T01:04:45.547916108Z" level=info msg="StopPodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\"" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.642 [INFO][5128] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.643 [INFO][5128] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" iface="eth0" netns="/var/run/netns/cni-e0240e6d-86f2-cded-f0dc-2cc56e7089a2" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.645 [INFO][5128] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" iface="eth0" netns="/var/run/netns/cni-e0240e6d-86f2-cded-f0dc-2cc56e7089a2" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.645 [INFO][5128] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" iface="eth0" netns="/var/run/netns/cni-e0240e6d-86f2-cded-f0dc-2cc56e7089a2" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.646 [INFO][5128] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.646 [INFO][5128] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.829 [INFO][5136] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.829 [INFO][5136] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.830 [INFO][5136] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.856 [WARNING][5136] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.857 [INFO][5136] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.878 [INFO][5136] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:45.888918 containerd[1494]: 2026-03-04 01:04:45.884 [INFO][5128] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Mar 4 01:04:45.892655 containerd[1494]: time="2026-03-04T01:04:45.890137058Z" level=info msg="TearDown network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" successfully" Mar 4 01:04:45.892655 containerd[1494]: time="2026-03-04T01:04:45.890181232Z" level=info msg="StopPodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" returns successfully" Mar 4 01:04:45.894151 systemd[1]: run-netns-cni\x2de0240e6d\x2d86f2\x2dcded\x2df0dc\x2d2cc56e7089a2.mount: Deactivated successfully. 
Mar 4 01:04:45.899195 containerd[1494]: time="2026-03-04T01:04:45.899139168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-n4mqq,Uid:544c68eb-a5e1-4a51-ad76-27bba7dba868,Namespace:calico-system,Attempt:1,}" Mar 4 01:04:46.137199 systemd-networkd[1407]: cali56a0f6aba51: Link UP Mar 4 01:04:46.139706 systemd-networkd[1407]: cali56a0f6aba51: Gained carrier Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.007 [INFO][5144] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0 goldmane-9f7667bb8- calico-system 544c68eb-a5e1-4a51-ad76-27bba7dba868 1149 0 2026-03-04 01:03:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-n4mqq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali56a0f6aba51 [] [] }} ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.007 [INFO][5144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.061 [INFO][5159] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" HandleID="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 
01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.071 [INFO][5159] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" HandleID="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e8090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-n4mqq", "timestamp":"2026-03-04 01:04:46.061295373 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00046e580)} Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.072 [INFO][5159] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.072 [INFO][5159] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.072 [INFO][5159] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.078 [INFO][5159] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.093 [INFO][5159] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.104 [INFO][5159] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.107 [INFO][5159] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.112 [INFO][5159] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.112 [INFO][5159] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.115 [INFO][5159] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.120 [INFO][5159] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.129 [INFO][5159] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.129 [INFO][5159] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" host="localhost" Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.129 [INFO][5159] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:46.220787 containerd[1494]: 2026-03-04 01:04:46.129 [INFO][5159] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" HandleID="k8s-pod-network.4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:46.222138 containerd[1494]: 2026-03-04 01:04:46.133 [INFO][5144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"544c68eb-a5e1-4a51-ad76-27bba7dba868", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-n4mqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56a0f6aba51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:46.222138 containerd[1494]: 2026-03-04 01:04:46.133 [INFO][5144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:46.222138 containerd[1494]: 2026-03-04 01:04:46.133 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56a0f6aba51 ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:46.222138 containerd[1494]: 2026-03-04 01:04:46.138 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:46.222138 containerd[1494]: 2026-03-04 01:04:46.139 [INFO][5144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"544c68eb-a5e1-4a51-ad76-27bba7dba868", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd", Pod:"goldmane-9f7667bb8-n4mqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56a0f6aba51", MAC:"da:9a:b4:81:0c:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:46.222138 containerd[1494]: 2026-03-04 01:04:46.211 [INFO][5144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd" Namespace="calico-system" Pod="goldmane-9f7667bb8-n4mqq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0" Mar 4 01:04:46.457995 containerd[1494]: time="2026-03-04T01:04:46.454986330Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:46.457995 containerd[1494]: time="2026-03-04T01:04:46.455097047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:46.457995 containerd[1494]: time="2026-03-04T01:04:46.455114520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:46.457995 containerd[1494]: time="2026-03-04T01:04:46.455233633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:46.504824 systemd[1]: Started cri-containerd-4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd.scope - libcontainer container 4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd. Mar 4 01:04:46.528226 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:46.574471 containerd[1494]: time="2026-03-04T01:04:46.574303227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-n4mqq,Uid:544c68eb-a5e1-4a51-ad76-27bba7dba868,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd\"" Mar 4 01:04:47.034170 systemd-networkd[1407]: cali637451f3f81: Gained IPv6LL Mar 4 01:04:47.598174 containerd[1494]: time="2026-03-04T01:04:47.598087246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:47.630641 containerd[1494]: time="2026-03-04T01:04:47.630441619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 4 01:04:47.632847 containerd[1494]: time="2026-03-04T01:04:47.632708178Z" 
level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:47.637726 containerd[1494]: time="2026-03-04T01:04:47.637391646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:47.652906 containerd[1494]: time="2026-03-04T01:04:47.652746863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.496327607s" Mar 4 01:04:47.652906 containerd[1494]: time="2026-03-04T01:04:47.652814670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 4 01:04:47.658131 containerd[1494]: time="2026-03-04T01:04:47.657670544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 01:04:47.674545 containerd[1494]: time="2026-03-04T01:04:47.674402824Z" level=info msg="CreateContainer within sandbox \"f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 01:04:47.708730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836228143.mount: Deactivated successfully. 
Mar 4 01:04:47.711842 containerd[1494]: time="2026-03-04T01:04:47.711754757Z" level=info msg="CreateContainer within sandbox \"f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d1ba328a36573e74ef1be432ddb3b962a307d0c40ea3139f6d259b60d8b9fac4\"" Mar 4 01:04:47.713710 containerd[1494]: time="2026-03-04T01:04:47.712847172Z" level=info msg="StartContainer for \"d1ba328a36573e74ef1be432ddb3b962a307d0c40ea3139f6d259b60d8b9fac4\"" Mar 4 01:04:47.842833 systemd[1]: Started cri-containerd-d1ba328a36573e74ef1be432ddb3b962a307d0c40ea3139f6d259b60d8b9fac4.scope - libcontainer container d1ba328a36573e74ef1be432ddb3b962a307d0c40ea3139f6d259b60d8b9fac4. Mar 4 01:04:47.929979 systemd-networkd[1407]: cali56a0f6aba51: Gained IPv6LL Mar 4 01:04:47.933788 containerd[1494]: time="2026-03-04T01:04:47.933691339Z" level=info msg="StartContainer for \"d1ba328a36573e74ef1be432ddb3b962a307d0c40ea3139f6d259b60d8b9fac4\" returns successfully" Mar 4 01:04:48.304610 kubelet[2586]: I0304 01:04:48.304477 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cd8cffdcd-9vxkx" podStartSLOduration=47.803457367 podStartE2EDuration="50.304458073s" podCreationTimestamp="2026-03-04 01:03:58 +0000 UTC" firstStartedPulling="2026-03-04 01:04:45.155766132 +0000 UTC m=+75.468858733" lastFinishedPulling="2026-03-04 01:04:47.656766818 +0000 UTC m=+77.969859439" observedRunningTime="2026-03-04 01:04:48.303154955 +0000 UTC m=+78.616247556" watchObservedRunningTime="2026-03-04 01:04:48.304458073 +0000 UTC m=+78.617550684" Mar 4 01:04:49.544432 containerd[1494]: time="2026-03-04T01:04:49.544244242Z" level=info msg="StopPodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\"" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.621 [INFO][5333] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.621 [INFO][5333] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" iface="eth0" netns="/var/run/netns/cni-2389ed39-9351-12d4-0e8d-0c519045fbe6" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.622 [INFO][5333] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" iface="eth0" netns="/var/run/netns/cni-2389ed39-9351-12d4-0e8d-0c519045fbe6" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.622 [INFO][5333] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" iface="eth0" netns="/var/run/netns/cni-2389ed39-9351-12d4-0e8d-0c519045fbe6" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.622 [INFO][5333] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.622 [INFO][5333] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.674 [INFO][5341] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.674 [INFO][5341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.674 [INFO][5341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.684 [WARNING][5341] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.684 [INFO][5341] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.687 [INFO][5341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:49.694478 containerd[1494]: 2026-03-04 01:04:49.690 [INFO][5333] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:04:49.695321 containerd[1494]: time="2026-03-04T01:04:49.694959855Z" level=info msg="TearDown network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" successfully" Mar 4 01:04:49.695321 containerd[1494]: time="2026-03-04T01:04:49.694989692Z" level=info msg="StopPodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" returns successfully" Mar 4 01:04:49.698914 kubelet[2586]: E0304 01:04:49.698862 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:49.699731 systemd[1]: run-netns-cni\x2d2389ed39\x2d9351\x2d12d4\x2d0e8d\x2d0c519045fbe6.mount: Deactivated successfully. Mar 4 01:04:49.700172 containerd[1494]: time="2026-03-04T01:04:49.699758926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtjj8,Uid:aadc3f76-505a-444f-bf40-597198f092e8,Namespace:kube-system,Attempt:1,}" Mar 4 01:04:49.920858 systemd-networkd[1407]: calif9229c85900: Link UP Mar 4 01:04:49.922140 systemd-networkd[1407]: calif9229c85900: Gained carrier Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.802 [INFO][5349] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--jtjj8-eth0 coredns-7d764666f9- kube-system aadc3f76-505a-444f-bf40-597198f092e8 1183 0 2026-03-04 01:03:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-jtjj8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9229c85900 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] 
[] }} ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.803 [INFO][5349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.848 [INFO][5363] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" HandleID="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.858 [INFO][5363] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" HandleID="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037cb70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-jtjj8", "timestamp":"2026-03-04 01:04:49.848517486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000210dc0)} Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.859 [INFO][5363] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.859 [INFO][5363] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.859 [INFO][5363] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.863 [INFO][5363] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.872 [INFO][5363] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.880 [INFO][5363] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.888 [INFO][5363] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.892 [INFO][5363] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.893 [INFO][5363] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.895 [INFO][5363] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.901 [INFO][5363] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.911 [INFO][5363] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.911 [INFO][5363] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" host="localhost" Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.911 [INFO][5363] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:49.944219 containerd[1494]: 2026-03-04 01:04:49.911 [INFO][5363] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" HandleID="k8s-pod-network.2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.944995 containerd[1494]: 2026-03-04 01:04:49.916 [INFO][5349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jtjj8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"aadc3f76-505a-444f-bf40-597198f092e8", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-jtjj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9229c85900", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:49.944995 containerd[1494]: 2026-03-04 01:04:49.916 [INFO][5349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.944995 containerd[1494]: 2026-03-04 01:04:49.916 [INFO][5349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9229c85900 ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" 
Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.944995 containerd[1494]: 2026-03-04 01:04:49.921 [INFO][5349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:49.944995 containerd[1494]: 2026-03-04 01:04:49.922 [INFO][5349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jtjj8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"aadc3f76-505a-444f-bf40-597198f092e8", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a", Pod:"coredns-7d764666f9-jtjj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9229c85900", MAC:"ce:b2:bf:90:c7:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:49.944995 containerd[1494]: 2026-03-04 01:04:49.938 [INFO][5349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a" Namespace="kube-system" Pod="coredns-7d764666f9-jtjj8" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:04:50.020146 containerd[1494]: time="2026-03-04T01:04:50.016806191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:04:50.020146 containerd[1494]: time="2026-03-04T01:04:50.019900471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:04:50.020146 containerd[1494]: time="2026-03-04T01:04:50.019951656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:50.020430 containerd[1494]: time="2026-03-04T01:04:50.020264772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:04:50.066830 systemd[1]: Started cri-containerd-2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a.scope - libcontainer container 2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a. Mar 4 01:04:50.100455 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:04:50.139008 containerd[1494]: time="2026-03-04T01:04:50.138893764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jtjj8,Uid:aadc3f76-505a-444f-bf40-597198f092e8,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a\"" Mar 4 01:04:50.141121 kubelet[2586]: E0304 01:04:50.140846 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:50.148695 containerd[1494]: time="2026-03-04T01:04:50.148553658Z" level=info msg="CreateContainer within sandbox \"2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:04:50.168296 containerd[1494]: time="2026-03-04T01:04:50.168151865Z" level=info msg="CreateContainer within sandbox \"2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19240998668142616a76ed39b57106edd1635904b0482c681fb26d5d99c82f84\"" Mar 4 01:04:50.169407 containerd[1494]: time="2026-03-04T01:04:50.169311284Z" level=info msg="StartContainer for \"19240998668142616a76ed39b57106edd1635904b0482c681fb26d5d99c82f84\"" Mar 4 01:04:50.235388 systemd[1]: Started 
cri-containerd-19240998668142616a76ed39b57106edd1635904b0482c681fb26d5d99c82f84.scope - libcontainer container 19240998668142616a76ed39b57106edd1635904b0482c681fb26d5d99c82f84. Mar 4 01:04:50.299875 containerd[1494]: time="2026-03-04T01:04:50.290264081Z" level=info msg="StartContainer for \"19240998668142616a76ed39b57106edd1635904b0482c681fb26d5d99c82f84\" returns successfully" Mar 4 01:04:50.308158 kubelet[2586]: E0304 01:04:50.307935 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:50.702828 systemd[1]: run-containerd-runc-k8s.io-2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a-runc.NWVUd5.mount: Deactivated successfully. Mar 4 01:04:51.083730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889750751.mount: Deactivated successfully. Mar 4 01:04:51.311287 kubelet[2586]: E0304 01:04:51.311143 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:51.336515 kubelet[2586]: I0304 01:04:51.335446 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jtjj8" podStartSLOduration=76.335427186 podStartE2EDuration="1m16.335427186s" podCreationTimestamp="2026-03-04 01:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:04:50.326550523 +0000 UTC m=+80.639643184" watchObservedRunningTime="2026-03-04 01:04:51.335427186 +0000 UTC m=+81.648519788" Mar 4 01:04:51.450168 systemd-networkd[1407]: calif9229c85900: Gained IPv6LL Mar 4 01:04:51.700184 containerd[1494]: time="2026-03-04T01:04:51.699821295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 4 01:04:51.701678 containerd[1494]: time="2026-03-04T01:04:51.701620913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 4 01:04:51.703673 containerd[1494]: time="2026-03-04T01:04:51.703629541Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:51.706853 containerd[1494]: time="2026-03-04T01:04:51.706687167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:51.707381 containerd[1494]: time="2026-03-04T01:04:51.707325299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.049615332s" Mar 4 01:04:51.707381 containerd[1494]: time="2026-03-04T01:04:51.707373590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 4 01:04:51.715368 containerd[1494]: time="2026-03-04T01:04:51.715266048Z" level=info msg="CreateContainer within sandbox \"4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 01:04:51.746810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508159360.mount: Deactivated successfully. 
Mar 4 01:04:51.747650 containerd[1494]: time="2026-03-04T01:04:51.747346550Z" level=info msg="CreateContainer within sandbox \"4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1\"" Mar 4 01:04:51.748882 containerd[1494]: time="2026-03-04T01:04:51.748818765Z" level=info msg="StartContainer for \"a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1\"" Mar 4 01:04:51.789113 systemd[1]: run-containerd-runc-k8s.io-a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1-runc.w8jGNn.mount: Deactivated successfully. Mar 4 01:04:51.800772 systemd[1]: Started cri-containerd-a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1.scope - libcontainer container a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1. Mar 4 01:04:51.862493 containerd[1494]: time="2026-03-04T01:04:51.862394654Z" level=info msg="StartContainer for \"a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1\" returns successfully" Mar 4 01:04:52.317399 kubelet[2586]: E0304 01:04:52.317318 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:52.340428 kubelet[2586]: I0304 01:04:52.340316 2586 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-n4mqq" podStartSLOduration=50.209133023 podStartE2EDuration="55.340295136s" podCreationTimestamp="2026-03-04 01:03:57 +0000 UTC" firstStartedPulling="2026-03-04 01:04:46.577007959 +0000 UTC m=+76.890100560" lastFinishedPulling="2026-03-04 01:04:51.708170072 +0000 UTC m=+82.021262673" observedRunningTime="2026-03-04 01:04:52.339674339 +0000 UTC m=+82.652766940" watchObservedRunningTime="2026-03-04 01:04:52.340295136 +0000 UTC m=+82.653387737" Mar 4 01:04:53.323802 kubelet[2586]: 
E0304 01:04:53.323523 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:54.543129 kubelet[2586]: E0304 01:04:54.542819 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:55.547656 kubelet[2586]: E0304 01:04:55.545047 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:55.596024 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082).
Mar 4 01:04:55.744049 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:55.751149 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:55.765372 systemd-logind[1464]: New session 10 of user core.
Mar 4 01:04:55.771959 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 4 01:04:56.387721 sshd[5589]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:56.393332 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:58082.service: Deactivated successfully.
Mar 4 01:04:56.396349 systemd[1]: session-10.scope: Deactivated successfully.
Mar 4 01:04:56.397443 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit.
Mar 4 01:04:56.399908 systemd-logind[1464]: Removed session 10.
Mar 4 01:05:01.403185 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:58092.service - OpenSSH per-connection server daemon (10.0.0.1:58092).
Mar 4 01:05:01.508398 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 58092 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:01.511403 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:01.518182 systemd-logind[1464]: New session 11 of user core.
Mar 4 01:05:01.528910 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 4 01:05:01.731819 sshd[5634]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:01.740332 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:58092.service: Deactivated successfully.
Mar 4 01:05:01.743509 systemd[1]: session-11.scope: Deactivated successfully.
Mar 4 01:05:01.746970 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit.
Mar 4 01:05:01.749073 systemd-logind[1464]: Removed session 11.
Mar 4 01:05:06.763098 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:37106.service - OpenSSH per-connection server daemon (10.0.0.1:37106).
Mar 4 01:05:06.905423 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 37106 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:06.910426 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:06.921273 systemd-logind[1464]: New session 12 of user core.
Mar 4 01:05:06.937907 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 4 01:05:07.156431 sshd[5652]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:07.164266 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:37106.service: Deactivated successfully.
Mar 4 01:05:07.167404 systemd[1]: session-12.scope: Deactivated successfully.
Mar 4 01:05:07.169133 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit.
Mar 4 01:05:07.174376 systemd-logind[1464]: Removed session 12.
Mar 4 01:05:12.176081 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:34574.service - OpenSSH per-connection server daemon (10.0.0.1:34574).
Mar 4 01:05:12.247141 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 34574 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:12.249789 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:12.258002 systemd-logind[1464]: New session 13 of user core.
Mar 4 01:05:12.263879 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 01:05:12.460404 sshd[5699]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:12.465114 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:34574.service: Deactivated successfully.
Mar 4 01:05:12.468738 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 01:05:12.472049 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit.
Mar 4 01:05:12.474118 systemd-logind[1464]: Removed session 13.
Mar 4 01:05:17.479867 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:34590.service - OpenSSH per-connection server daemon (10.0.0.1:34590).
Mar 4 01:05:17.551871 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 34590 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:17.554113 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:17.562031 systemd-logind[1464]: New session 14 of user core.
Mar 4 01:05:17.567875 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 01:05:17.723047 sshd[5714]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:17.729116 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:34590.service: Deactivated successfully.
Mar 4 01:05:17.732117 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 01:05:17.734485 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit.
Mar 4 01:05:17.736853 systemd-logind[1464]: Removed session 14.
Mar 4 01:05:18.543158 kubelet[2586]: E0304 01:05:18.542928 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:05:22.742671 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:53786.service - OpenSSH per-connection server daemon (10.0.0.1:53786).
Mar 4 01:05:22.800097 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 53786 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:22.801829 sshd[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:22.807893 systemd-logind[1464]: New session 15 of user core.
Mar 4 01:05:22.813041 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 01:05:22.958375 sshd[5780]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:22.967405 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:53786.service: Deactivated successfully.
Mar 4 01:05:22.969355 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 01:05:22.971654 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit.
Mar 4 01:05:22.980486 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:53798.service - OpenSSH per-connection server daemon (10.0.0.1:53798).
Mar 4 01:05:22.982420 systemd-logind[1464]: Removed session 15.
Mar 4 01:05:23.018285 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 53798 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:23.020757 sshd[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:23.027325 systemd-logind[1464]: New session 16 of user core.
Mar 4 01:05:23.034870 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 01:05:23.282880 sshd[5795]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:23.301381 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:53798.service: Deactivated successfully.
Mar 4 01:05:23.307376 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 01:05:23.312292 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit.
Mar 4 01:05:23.326021 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814).
Mar 4 01:05:23.332684 systemd-logind[1464]: Removed session 16.
Mar 4 01:05:23.393893 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:23.397052 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:23.405785 systemd-logind[1464]: New session 17 of user core.
Mar 4 01:05:23.410804 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 01:05:23.565484 sshd[5807]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:23.571870 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:53814.service: Deactivated successfully.
Mar 4 01:05:23.574840 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 01:05:23.576038 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit.
Mar 4 01:05:23.578683 systemd-logind[1464]: Removed session 17.
Mar 4 01:05:28.595327 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:53822.service - OpenSSH per-connection server daemon (10.0.0.1:53822).
Mar 4 01:05:28.639659 sshd[5846]: Accepted publickey for core from 10.0.0.1 port 53822 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:28.641811 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:28.648969 systemd-logind[1464]: New session 18 of user core.
Mar 4 01:05:28.655800 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 01:05:28.825406 sshd[5846]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:28.834429 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:53822.service: Deactivated successfully.
Mar 4 01:05:28.837790 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 01:05:28.839215 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit.
Mar 4 01:05:28.842962 systemd-logind[1464]: Removed session 18.
Mar 4 01:05:30.327492 containerd[1494]: time="2026-03-04T01:05:30.327365526Z" level=info msg="StopPodSandbox for \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\""
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.438 [WARNING][5875] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"72e16b4b-2414-436d-9047-7c572057b1a0", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651", Pod:"calico-apiserver-7d448765fc-8j94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliebbf1b98ed9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.441 [INFO][5875] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.441 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" iface="eth0" netns=""
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.441 [INFO][5875] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.441 [INFO][5875] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.531 [INFO][5883] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0"
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.533 [INFO][5883] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.534 [INFO][5883] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.543 [WARNING][5883] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0"
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.543 [INFO][5883] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0"
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.546 [INFO][5883] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:05:30.554320 containerd[1494]: 2026-03-04 01:05:30.551 [INFO][5875] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.555053 containerd[1494]: time="2026-03-04T01:05:30.554942388Z" level=info msg="TearDown network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\" successfully"
Mar 4 01:05:30.555053 containerd[1494]: time="2026-03-04T01:05:30.555008391Z" level=info msg="StopPodSandbox for \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\" returns successfully"
Mar 4 01:05:30.618236 containerd[1494]: time="2026-03-04T01:05:30.618003447Z" level=info msg="RemovePodSandbox for \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\""
Mar 4 01:05:30.621744 containerd[1494]: time="2026-03-04T01:05:30.621539397Z" level=info msg="Forcibly stopping sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\""
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.677 [WARNING][5903] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"72e16b4b-2414-436d-9047-7c572057b1a0", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28184c0c153e7bd1685c1c423b19a79f2949007c5c56af14e220c8fe03037651", Pod:"calico-apiserver-7d448765fc-8j94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliebbf1b98ed9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.677 [INFO][5903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.677 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" iface="eth0" netns=""
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.677 [INFO][5903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.677 [INFO][5903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.725 [INFO][5911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0"
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.725 [INFO][5911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.725 [INFO][5911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.734 [WARNING][5911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0"
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.734 [INFO][5911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" HandleID="k8s-pod-network.7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a" Workload="localhost-k8s-calico--apiserver--7d448765fc--8j94r-eth0"
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.737 [INFO][5911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:05:30.744539 containerd[1494]: 2026-03-04 01:05:30.740 [INFO][5903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a"
Mar 4 01:05:30.745967 containerd[1494]: time="2026-03-04T01:05:30.744650451Z" level=info msg="TearDown network for sandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\" successfully"
Mar 4 01:05:30.767312 containerd[1494]: time="2026-03-04T01:05:30.767185020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:05:30.767499 containerd[1494]: time="2026-03-04T01:05:30.767349078Z" level=info msg="RemovePodSandbox \"7d37470c2db1d5be4e1994de60db57e05b645bad4e57b2207a640e59d557f17a\" returns successfully"
Mar 4 01:05:30.780985 containerd[1494]: time="2026-03-04T01:05:30.780933772Z" level=info msg="StopPodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\""
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.845 [WARNING][5928] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"544c68eb-a5e1-4a51-ad76-27bba7dba868", ResourceVersion:"1401", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd", Pod:"goldmane-9f7667bb8-n4mqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56a0f6aba51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.845 [INFO][5928] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.845 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" iface="eth0" netns=""
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.846 [INFO][5928] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.846 [INFO][5928] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.886 [INFO][5936] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0"
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.886 [INFO][5936] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.886 [INFO][5936] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.894 [WARNING][5936] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0"
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.895 [INFO][5936] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0"
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.898 [INFO][5936] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:05:30.907656 containerd[1494]: 2026-03-04 01:05:30.904 [INFO][5928] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:30.907656 containerd[1494]: time="2026-03-04T01:05:30.907448204Z" level=info msg="TearDown network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" successfully"
Mar 4 01:05:30.907656 containerd[1494]: time="2026-03-04T01:05:30.907489140Z" level=info msg="StopPodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" returns successfully"
Mar 4 01:05:30.909100 containerd[1494]: time="2026-03-04T01:05:30.908102277Z" level=info msg="RemovePodSandbox for \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\""
Mar 4 01:05:30.909100 containerd[1494]: time="2026-03-04T01:05:30.908132253Z" level=info msg="Forcibly stopping sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\""
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:30.974 [WARNING][5954] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"544c68eb-a5e1-4a51-ad76-27bba7dba868", ResourceVersion:"1401", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d8d84ffe0c0c40ce726890c6c31bda8cb9bbc512f055f5802253659e41c84bd", Pod:"goldmane-9f7667bb8-n4mqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56a0f6aba51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:30.974 [INFO][5954] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:30.974 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" iface="eth0" netns=""
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:30.974 [INFO][5954] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:30.974 [INFO][5954] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.013 [INFO][5963] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0"
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.013 [INFO][5963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.013 [INFO][5963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.021 [WARNING][5963] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0"
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.022 [INFO][5963] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" HandleID="k8s-pod-network.162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560" Workload="localhost-k8s-goldmane--9f7667bb8--n4mqq-eth0"
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.025 [INFO][5963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:05:31.032199 containerd[1494]: 2026-03-04 01:05:31.028 [INFO][5954] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560"
Mar 4 01:05:31.033384 containerd[1494]: time="2026-03-04T01:05:31.033316583Z" level=info msg="TearDown network for sandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" successfully"
Mar 4 01:05:31.048093 containerd[1494]: time="2026-03-04T01:05:31.047990971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:05:31.048093 containerd[1494]: time="2026-03-04T01:05:31.048094625Z" level=info msg="RemovePodSandbox \"162bb0f55f84d0c55aab5a02169d2385688e41adb27192ab98a1d390c3f36560\" returns successfully"
Mar 4 01:05:31.049155 containerd[1494]: time="2026-03-04T01:05:31.048992292Z" level=info msg="StopPodSandbox for \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\""
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.109 [WARNING][5982] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b8wzc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70e9d987-0384-4b7c-aa94-bbc127680682", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553", Pod:"csi-node-driver-b8wzc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b789a1bc6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.109 [INFO][5982] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.109 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" iface="eth0" netns=""
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.109 [INFO][5982] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.109 [INFO][5982] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.146 [INFO][5991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0"
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.147 [INFO][5991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.147 [INFO][5991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.156 [WARNING][5991] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0"
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.156 [INFO][5991] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0"
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.158 [INFO][5991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:05:31.166951 containerd[1494]: 2026-03-04 01:05:31.161 [INFO][5982] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.166951 containerd[1494]: time="2026-03-04T01:05:31.165238196Z" level=info msg="TearDown network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\" successfully"
Mar 4 01:05:31.166951 containerd[1494]: time="2026-03-04T01:05:31.165321533Z" level=info msg="StopPodSandbox for \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\" returns successfully"
Mar 4 01:05:31.166951 containerd[1494]: time="2026-03-04T01:05:31.166129087Z" level=info msg="RemovePodSandbox for \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\""
Mar 4 01:05:31.166951 containerd[1494]: time="2026-03-04T01:05:31.166172398Z" level=info msg="Forcibly stopping sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\""
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.223 [WARNING][6008] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b8wzc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70e9d987-0384-4b7c-aa94-bbc127680682", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38eb9ab6f7f46e0973129dae74f150904cd5b7ae29c6d440098701069b804553", Pod:"csi-node-driver-b8wzc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b789a1bc6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.224 [INFO][6008] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.224 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" iface="eth0" netns=""
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.224 [INFO][6008] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.224 [INFO][6008] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c"
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.259 [INFO][6016] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0"
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.259 [INFO][6016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.259 [INFO][6016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.267 [WARNING][6016] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.267 [INFO][6016] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" HandleID="k8s-pod-network.715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Workload="localhost-k8s-csi--node--driver--b8wzc-eth0" Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.270 [INFO][6016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:31.278257 containerd[1494]: 2026-03-04 01:05:31.274 [INFO][6008] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c" Mar 4 01:05:31.278257 containerd[1494]: time="2026-03-04T01:05:31.278225051Z" level=info msg="TearDown network for sandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\" successfully" Mar 4 01:05:31.296953 containerd[1494]: time="2026-03-04T01:05:31.296835717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:05:31.297064 containerd[1494]: time="2026-03-04T01:05:31.296966381Z" level=info msg="RemovePodSandbox \"715176b397af29fe03d84722c5690eb6ed74f437d183f5c788f5f3034a75cb0c\" returns successfully" Mar 4 01:05:31.297909 containerd[1494]: time="2026-03-04T01:05:31.297812784Z" level=info msg="StopPodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\"" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.355 [WARNING][6033] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jtjj8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"aadc3f76-505a-444f-bf40-597198f092e8", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a", Pod:"coredns-7d764666f9-jtjj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9229c85900", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.356 [INFO][6033] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.356 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" iface="eth0" netns="" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.356 [INFO][6033] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.356 [INFO][6033] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.403 [INFO][6041] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.403 [INFO][6041] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.403 [INFO][6041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.413 [WARNING][6041] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.413 [INFO][6041] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.416 [INFO][6041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:31.423967 containerd[1494]: 2026-03-04 01:05:31.420 [INFO][6033] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.423967 containerd[1494]: time="2026-03-04T01:05:31.423811867Z" level=info msg="TearDown network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" successfully" Mar 4 01:05:31.423967 containerd[1494]: time="2026-03-04T01:05:31.423854557Z" level=info msg="StopPodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" returns successfully" Mar 4 01:05:31.426676 containerd[1494]: time="2026-03-04T01:05:31.424768630Z" level=info msg="RemovePodSandbox for \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\"" Mar 4 01:05:31.426676 containerd[1494]: time="2026-03-04T01:05:31.424816740Z" level=info msg="Forcibly stopping sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\"" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.507 [WARNING][6058] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--jtjj8-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"aadc3f76-505a-444f-bf40-597198f092e8", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ccc96d9f6e2455e48a5dbbe81e7d2d89ce03b0fc0106b349bc68e6959226d7a", Pod:"coredns-7d764666f9-jtjj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9229c85900", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.507 [INFO][6058] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.507 [INFO][6058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" iface="eth0" netns="" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.507 [INFO][6058] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.507 [INFO][6058] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.548 [INFO][6066] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.548 [INFO][6066] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.548 [INFO][6066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.558 [WARNING][6066] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.558 [INFO][6066] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" HandleID="k8s-pod-network.62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Workload="localhost-k8s-coredns--7d764666f9--jtjj8-eth0" Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.562 [INFO][6066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:31.572909 containerd[1494]: 2026-03-04 01:05:31.568 [INFO][6058] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969" Mar 4 01:05:31.572909 containerd[1494]: time="2026-03-04T01:05:31.572892280Z" level=info msg="TearDown network for sandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" successfully" Mar 4 01:05:31.579509 containerd[1494]: time="2026-03-04T01:05:31.579298373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:05:31.579509 containerd[1494]: time="2026-03-04T01:05:31.579382360Z" level=info msg="RemovePodSandbox \"62abd9d321b980cd8fd4714c9c81497be60472e9e4f71f8a39050e2d8390c969\" returns successfully" Mar 4 01:05:31.580184 containerd[1494]: time="2026-03-04T01:05:31.580091396Z" level=info msg="StopPodSandbox for \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\"" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.645 [WARNING][6082] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0", GenerateName:"calico-kube-controllers-cd8cffdcd-", Namespace:"calico-system", SelfLink:"", UID:"ac58af22-43eb-4c31-9dcb-bdf8a11f443d", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8cffdcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f", Pod:"calico-kube-controllers-cd8cffdcd-9vxkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali637451f3f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.646 [INFO][6082] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.646 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" iface="eth0" netns="" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.646 [INFO][6082] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.646 [INFO][6082] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.679 [INFO][6090] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.680 [INFO][6090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.680 [INFO][6090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.690 [WARNING][6090] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.690 [INFO][6090] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.694 [INFO][6090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:31.702300 containerd[1494]: 2026-03-04 01:05:31.698 [INFO][6082] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.702888 containerd[1494]: time="2026-03-04T01:05:31.702240128Z" level=info msg="TearDown network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" successfully" Mar 4 01:05:31.702888 containerd[1494]: time="2026-03-04T01:05:31.702330687Z" level=info msg="StopPodSandbox for \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" returns successfully" Mar 4 01:05:31.703396 containerd[1494]: time="2026-03-04T01:05:31.703238927Z" level=info msg="RemovePodSandbox for \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\"" Mar 4 01:05:31.703396 containerd[1494]: time="2026-03-04T01:05:31.703371175Z" level=info msg="Forcibly stopping sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\"" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.763 [WARNING][6107] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0", GenerateName:"calico-kube-controllers-cd8cffdcd-", Namespace:"calico-system", SelfLink:"", UID:"ac58af22-43eb-4c31-9dcb-bdf8a11f443d", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8cffdcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f577029628cefd5d170a35a6cb2c78177ef6efced5895626e97a16b6df15782f", Pod:"calico-kube-controllers-cd8cffdcd-9vxkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali637451f3f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.764 [INFO][6107] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.764 [INFO][6107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" iface="eth0" netns="" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.764 [INFO][6107] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.764 [INFO][6107] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.806 [INFO][6115] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.807 [INFO][6115] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.807 [INFO][6115] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.820 [WARNING][6115] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.820 [INFO][6115] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" HandleID="k8s-pod-network.f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Workload="localhost-k8s-calico--kube--controllers--cd8cffdcd--9vxkx-eth0" Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.824 [INFO][6115] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:31.832140 containerd[1494]: 2026-03-04 01:05:31.828 [INFO][6107] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084" Mar 4 01:05:31.832851 containerd[1494]: time="2026-03-04T01:05:31.832201036Z" level=info msg="TearDown network for sandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" successfully" Mar 4 01:05:31.839780 containerd[1494]: time="2026-03-04T01:05:31.839670164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:05:31.839928 containerd[1494]: time="2026-03-04T01:05:31.839818502Z" level=info msg="RemovePodSandbox \"f77e5bbee4b25a5de2674f9ee3fafbc6e315a7c0dc9d1050fd207b2335d90084\" returns successfully" Mar 4 01:05:31.840870 containerd[1494]: time="2026-03-04T01:05:31.840837621Z" level=info msg="StopPodSandbox for \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\"" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.910 [WARNING][6133] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" WorkloadEndpoint="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.910 [INFO][6133] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.910 [INFO][6133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" iface="eth0" netns="" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.910 [INFO][6133] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.910 [INFO][6133] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.952 [INFO][6141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.952 [INFO][6141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.952 [INFO][6141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.961 [WARNING][6141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.962 [INFO][6141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.964 [INFO][6141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:31.971681 containerd[1494]: 2026-03-04 01:05:31.968 [INFO][6133] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:31.971681 containerd[1494]: time="2026-03-04T01:05:31.971440665Z" level=info msg="TearDown network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\" successfully" Mar 4 01:05:31.971681 containerd[1494]: time="2026-03-04T01:05:31.971471744Z" level=info msg="StopPodSandbox for \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\" returns successfully" Mar 4 01:05:31.973075 containerd[1494]: time="2026-03-04T01:05:31.973030900Z" level=info msg="RemovePodSandbox for \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\"" Mar 4 01:05:31.973318 containerd[1494]: time="2026-03-04T01:05:31.973125557Z" level=info msg="Forcibly stopping sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\"" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.056 [WARNING][6159] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" 
WorkloadEndpoint="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.057 [INFO][6159] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.057 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" iface="eth0" netns="" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.057 [INFO][6159] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.057 [INFO][6159] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.099 [INFO][6168] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.099 [INFO][6168] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.099 [INFO][6168] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.108 [WARNING][6168] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.108 [INFO][6168] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" HandleID="k8s-pod-network.e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Workload="localhost-k8s-whisker--674c968669--fd8l6-eth0" Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.111 [INFO][6168] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:32.119983 containerd[1494]: 2026-03-04 01:05:32.116 [INFO][6159] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55" Mar 4 01:05:32.119983 containerd[1494]: time="2026-03-04T01:05:32.119964080Z" level=info msg="TearDown network for sandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\" successfully" Mar 4 01:05:32.126474 containerd[1494]: time="2026-03-04T01:05:32.126418372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:05:32.126544 containerd[1494]: time="2026-03-04T01:05:32.126523357Z" level=info msg="RemovePodSandbox \"e2ed98d1ab6cf9ee33ba24fd763024bb07e62527130a18699bd75e41c692ec55\" returns successfully" Mar 4 01:05:32.127960 containerd[1494]: time="2026-03-04T01:05:32.127828847Z" level=info msg="StopPodSandbox for \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\"" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.195 [WARNING][6187] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4r8bd-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"33342b7d-98dc-47af-8bcb-0cd875c1acc0", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f", Pod:"coredns-7d764666f9-4r8bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b90fa9a69e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.196 [INFO][6187] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.196 [INFO][6187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" iface="eth0" netns="" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.196 [INFO][6187] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.196 [INFO][6187] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.231 [INFO][6195] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.232 [INFO][6195] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.232 [INFO][6195] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.242 [WARNING][6195] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.242 [INFO][6195] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.245 [INFO][6195] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:32.253303 containerd[1494]: 2026-03-04 01:05:32.248 [INFO][6187] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.253303 containerd[1494]: time="2026-03-04T01:05:32.253180321Z" level=info msg="TearDown network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\" successfully" Mar 4 01:05:32.253303 containerd[1494]: time="2026-03-04T01:05:32.253220075Z" level=info msg="StopPodSandbox for \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\" returns successfully" Mar 4 01:05:32.254119 containerd[1494]: time="2026-03-04T01:05:32.253986037Z" level=info msg="RemovePodSandbox for \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\"" Mar 4 01:05:32.254119 containerd[1494]: time="2026-03-04T01:05:32.254065446Z" level=info msg="Forcibly stopping sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\"" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.313 [WARNING][6213] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4r8bd-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"33342b7d-98dc-47af-8bcb-0cd875c1acc0", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7239647d954de0812968da78dbd1221bc7330e2ac405b2fdd4488ba3e18db39f", Pod:"coredns-7d764666f9-4r8bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b90fa9a69e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.313 [INFO][6213] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.313 [INFO][6213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" iface="eth0" netns="" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.313 [INFO][6213] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.313 [INFO][6213] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.358 [INFO][6221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.359 [INFO][6221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.359 [INFO][6221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.367 [WARNING][6221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.367 [INFO][6221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" HandleID="k8s-pod-network.d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Workload="localhost-k8s-coredns--7d764666f9--4r8bd-eth0" Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.370 [INFO][6221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:32.377846 containerd[1494]: 2026-03-04 01:05:32.373 [INFO][6213] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704" Mar 4 01:05:32.379018 containerd[1494]: time="2026-03-04T01:05:32.377885661Z" level=info msg="TearDown network for sandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\" successfully" Mar 4 01:05:32.383987 containerd[1494]: time="2026-03-04T01:05:32.383896770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:05:32.384075 containerd[1494]: time="2026-03-04T01:05:32.384007337Z" level=info msg="RemovePodSandbox \"d7b5df2a529d3095acf523415b77d52ee93bd9e122d4d143866a317103223704\" returns successfully" Mar 4 01:05:32.384904 containerd[1494]: time="2026-03-04T01:05:32.384821059Z" level=info msg="StopPodSandbox for \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\"" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.441 [WARNING][6240] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"be51fa33-3cc1-4ec8-b66b-77876e60bb47", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4", Pod:"calico-apiserver-7d448765fc-92hmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"calib5c16ead2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.442 [INFO][6240] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.442 [INFO][6240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" iface="eth0" netns="" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.442 [INFO][6240] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.442 [INFO][6240] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.475 [INFO][6248] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.475 [INFO][6248] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.475 [INFO][6248] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.486 [WARNING][6248] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.486 [INFO][6248] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.490 [INFO][6248] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:32.498052 containerd[1494]: 2026-03-04 01:05:32.493 [INFO][6240] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.499234 containerd[1494]: time="2026-03-04T01:05:32.498383438Z" level=info msg="TearDown network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\" successfully" Mar 4 01:05:32.499234 containerd[1494]: time="2026-03-04T01:05:32.498426769Z" level=info msg="StopPodSandbox for \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\" returns successfully" Mar 4 01:05:32.499378 containerd[1494]: time="2026-03-04T01:05:32.499347551Z" level=info msg="RemovePodSandbox for \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\"" Mar 4 01:05:32.499423 containerd[1494]: time="2026-03-04T01:05:32.499385752Z" level=info msg="Forcibly stopping sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\"" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.570 [WARNING][6266] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0", GenerateName:"calico-apiserver-7d448765fc-", Namespace:"calico-system", SelfLink:"", UID:"be51fa33-3cc1-4ec8-b66b-77876e60bb47", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d448765fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c05855edd6627c8a2465dd186b9e58dfd1075b5fbb342ca786ca48bd881d51d4", Pod:"calico-apiserver-7d448765fc-92hmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib5c16ead2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.570 [INFO][6266] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.571 [INFO][6266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" iface="eth0" netns="" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.571 [INFO][6266] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.571 [INFO][6266] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.610 [INFO][6275] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.611 [INFO][6275] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.611 [INFO][6275] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.620 [WARNING][6275] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.620 [INFO][6275] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" HandleID="k8s-pod-network.6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Workload="localhost-k8s-calico--apiserver--7d448765fc--92hmt-eth0" Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.623 [INFO][6275] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:05:32.631621 containerd[1494]: 2026-03-04 01:05:32.627 [INFO][6266] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb" Mar 4 01:05:32.631621 containerd[1494]: time="2026-03-04T01:05:32.631422767Z" level=info msg="TearDown network for sandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\" successfully" Mar 4 01:05:32.639962 containerd[1494]: time="2026-03-04T01:05:32.639794455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:05:32.639962 containerd[1494]: time="2026-03-04T01:05:32.639868403Z" level=info msg="RemovePodSandbox \"6dd4fe58fbc41717e6616f9f6646d3f565fc7da31b0f48264eecea3daecf1cbb\" returns successfully" Mar 4 01:05:33.568927 systemd[1]: run-containerd-runc-k8s.io-a5b77983ac68a7f86803a8379efc167c1fce8d641d4c604657cb534079971ec1-runc.lh3ffz.mount: Deactivated successfully. 
Mar 4 01:05:33.839507 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:49624.service - OpenSSH per-connection server daemon (10.0.0.1:49624). Mar 4 01:05:33.930797 sshd[6327]: Accepted publickey for core from 10.0.0.1 port 49624 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:33.933516 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:33.941236 systemd-logind[1464]: New session 19 of user core. Mar 4 01:05:33.948839 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 4 01:05:34.116950 sshd[6327]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:34.132257 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:49624.service: Deactivated successfully. Mar 4 01:05:34.134859 systemd[1]: session-19.scope: Deactivated successfully. Mar 4 01:05:34.137513 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit. Mar 4 01:05:34.157251 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:49628.service - OpenSSH per-connection server daemon (10.0.0.1:49628). Mar 4 01:05:34.159024 systemd-logind[1464]: Removed session 19. Mar 4 01:05:34.194256 sshd[6341]: Accepted publickey for core from 10.0.0.1 port 49628 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:34.196436 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:34.202821 systemd-logind[1464]: New session 20 of user core. Mar 4 01:05:34.211855 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 4 01:05:34.618484 sshd[6341]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:34.633974 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:49628.service: Deactivated successfully. Mar 4 01:05:34.636909 systemd[1]: session-20.scope: Deactivated successfully. Mar 4 01:05:34.639477 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit. 
Mar 4 01:05:34.658425 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:49636.service - OpenSSH per-connection server daemon (10.0.0.1:49636). Mar 4 01:05:34.660082 systemd-logind[1464]: Removed session 20. Mar 4 01:05:34.699200 sshd[6353]: Accepted publickey for core from 10.0.0.1 port 49636 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:34.701905 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:34.710130 systemd-logind[1464]: New session 21 of user core. Mar 4 01:05:34.717843 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 4 01:05:35.328751 sshd[6353]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:35.344539 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:49636.service: Deactivated successfully. Mar 4 01:05:35.350078 systemd[1]: session-21.scope: Deactivated successfully. Mar 4 01:05:35.355727 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit. Mar 4 01:05:35.366118 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:49652.service - OpenSSH per-connection server daemon (10.0.0.1:49652). Mar 4 01:05:35.369466 systemd-logind[1464]: Removed session 21. Mar 4 01:05:35.484171 sshd[6378]: Accepted publickey for core from 10.0.0.1 port 49652 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:35.488978 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:35.496645 systemd-logind[1464]: New session 22 of user core. Mar 4 01:05:35.505907 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 4 01:05:35.913175 sshd[6378]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:35.931120 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:49652.service: Deactivated successfully. Mar 4 01:05:35.934836 systemd[1]: session-22.scope: Deactivated successfully. Mar 4 01:05:35.938428 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit. 
Mar 4 01:05:35.951643 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:49666.service - OpenSSH per-connection server daemon (10.0.0.1:49666). Mar 4 01:05:35.954271 systemd-logind[1464]: Removed session 22. Mar 4 01:05:36.000368 sshd[6392]: Accepted publickey for core from 10.0.0.1 port 49666 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:36.003246 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:36.012089 systemd-logind[1464]: New session 23 of user core. Mar 4 01:05:36.023091 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 4 01:05:36.204418 sshd[6392]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:36.210905 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:49666.service: Deactivated successfully. Mar 4 01:05:36.214977 systemd[1]: session-23.scope: Deactivated successfully. Mar 4 01:05:36.218108 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit. Mar 4 01:05:36.221249 systemd-logind[1464]: Removed session 23. Mar 4 01:05:41.220223 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). Mar 4 01:05:41.332652 sshd[6437]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:41.334957 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:41.341844 systemd-logind[1464]: New session 24 of user core. Mar 4 01:05:41.350833 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 4 01:05:41.529743 sshd[6437]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:41.536142 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:49674.service: Deactivated successfully. Mar 4 01:05:41.539178 systemd[1]: session-24.scope: Deactivated successfully. Mar 4 01:05:41.540981 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit. 
Mar 4 01:05:41.542699 systemd-logind[1464]: Removed session 24. Mar 4 01:05:42.559109 kubelet[2586]: E0304 01:05:42.558655 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:46.564546 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:42756.service - OpenSSH per-connection server daemon (10.0.0.1:42756). Mar 4 01:05:46.599446 sshd[6454]: Accepted publickey for core from 10.0.0.1 port 42756 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:46.601431 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:46.607357 systemd-logind[1464]: New session 25 of user core. Mar 4 01:05:46.615034 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 4 01:05:46.758804 sshd[6454]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:46.763987 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:42756.service: Deactivated successfully. Mar 4 01:05:46.766880 systemd[1]: session-25.scope: Deactivated successfully. Mar 4 01:05:46.768732 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit. Mar 4 01:05:46.770404 systemd-logind[1464]: Removed session 25. Mar 4 01:05:48.543968 kubelet[2586]: E0304 01:05:48.543823 2586 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:51.783552 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:42768.service - OpenSSH per-connection server daemon (10.0.0.1:42768). 
Mar 4 01:05:51.828956 sshd[6487]: Accepted publickey for core from 10.0.0.1 port 42768 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:51.832034 sshd[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:51.839043 systemd-logind[1464]: New session 26 of user core. Mar 4 01:05:51.852875 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 4 01:05:52.022006 sshd[6487]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:52.028116 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:42768.service: Deactivated successfully. Mar 4 01:05:52.031496 systemd[1]: session-26.scope: Deactivated successfully. Mar 4 01:05:52.033114 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit. Mar 4 01:05:52.035394 systemd-logind[1464]: Removed session 26.