Mar 7 02:08:59.095132 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 02:08:59.095166 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 02:08:59.095233 kernel: BIOS-provided physical RAM map:
Mar 7 02:08:59.095242 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 02:08:59.095250 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 7 02:08:59.095259 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 7 02:08:59.095271 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 7 02:08:59.095279 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 7 02:08:59.095285 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 7 02:08:59.095290 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 7 02:08:59.095298 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 7 02:08:59.095304 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 7 02:08:59.095309 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 7 02:08:59.095315 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 7 02:08:59.095322 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 7 02:08:59.095327 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 7 02:08:59.095336 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 7 02:08:59.095341 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 7 02:08:59.095347 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 7 02:08:59.095353 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 02:08:59.095359 kernel: NX (Execute Disable) protection: active
Mar 7 02:08:59.095365 kernel: APIC: Static calls initialized
Mar 7 02:08:59.095370 kernel: efi: EFI v2.7 by EDK II
Mar 7 02:08:59.095376 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 7 02:08:59.095382 kernel: SMBIOS 2.8 present.
Mar 7 02:08:59.095388 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 7 02:08:59.095394 kernel: Hypervisor detected: KVM
Mar 7 02:08:59.095402 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 02:08:59.095408 kernel: kvm-clock: using sched offset of 5666301705 cycles
Mar 7 02:08:59.095414 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 02:08:59.095420 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 02:08:59.095426 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 02:08:59.095432 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 02:08:59.095439 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 7 02:08:59.095445 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 02:08:59.095451 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 02:08:59.095459 kernel: Using GB pages for direct mapping
Mar 7 02:08:59.095465 kernel: Secure boot disabled
Mar 7 02:08:59.095471 kernel: ACPI: Early table checksum verification disabled
Mar 7 02:08:59.095477 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 7 02:08:59.095487 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 7 02:08:59.095493 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:08:59.095500 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:08:59.095508 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 7 02:08:59.095515 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:08:59.095521 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:08:59.095527 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:08:59.095533 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:08:59.095540 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 7 02:08:59.095546 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 7 02:08:59.095555 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 7 02:08:59.095561 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 7 02:08:59.095567 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 7 02:08:59.095574 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 7 02:08:59.095580 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 7 02:08:59.095586 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 7 02:08:59.095592 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 7 02:08:59.095598 kernel: No NUMA configuration found
Mar 7 02:08:59.095605 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 7 02:08:59.095613 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 7 02:08:59.095620 kernel: Zone ranges:
Mar 7 02:08:59.095626 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 02:08:59.095632 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 7 02:08:59.095638 kernel: Normal empty
Mar 7 02:08:59.095644 kernel: Movable zone start for each node
Mar 7 02:08:59.095651 kernel: Early memory node ranges
Mar 7 02:08:59.095657 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 02:08:59.095663 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 7 02:08:59.095669 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 7 02:08:59.095678 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 7 02:08:59.095684 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 7 02:08:59.095691 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 7 02:08:59.095697 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 7 02:08:59.095703 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 02:08:59.095709 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 02:08:59.095715 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 7 02:08:59.095721 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 02:08:59.095727 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 7 02:08:59.095736 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 7 02:08:59.095742 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 7 02:08:59.095749 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 02:08:59.095755 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 02:08:59.095761 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 02:08:59.095767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 02:08:59.095773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 02:08:59.095780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 02:08:59.095786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 02:08:59.095794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 02:08:59.095801 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 02:08:59.095807 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 02:08:59.095813 kernel: TSC deadline timer available
Mar 7 02:08:59.095819 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 02:08:59.095825 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 02:08:59.095831 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 02:08:59.095838 kernel: kvm-guest: setup PV sched yield
Mar 7 02:08:59.095844 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 7 02:08:59.095850 kernel: Booting paravirtualized kernel on KVM
Mar 7 02:08:59.095859 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 02:08:59.095865 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 02:08:59.095871 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 02:08:59.095911 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 02:08:59.095919 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 02:08:59.095925 kernel: kvm-guest: PV spinlocks enabled
Mar 7 02:08:59.095932 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 02:08:59.095939 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 02:08:59.095949 kernel: random: crng init done
Mar 7 02:08:59.095961 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 02:08:59.095973 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 02:08:59.095985 kernel: Fallback order for Node 0: 0
Mar 7 02:08:59.095995 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 7 02:08:59.096008 kernel: Policy zone: DMA32
Mar 7 02:08:59.096018 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 02:08:59.096032 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 7 02:08:59.096042 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 02:08:59.096059 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 02:08:59.096072 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 02:08:59.096083 kernel: Dynamic Preempt: voluntary
Mar 7 02:08:59.096095 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 02:08:59.096122 kernel: rcu: RCU event tracing is enabled.
Mar 7 02:08:59.096139 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 02:08:59.096151 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 02:08:59.096165 kernel: Rude variant of Tasks RCU enabled.
Mar 7 02:08:59.096281 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 02:08:59.096295 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 02:08:59.096310 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 02:08:59.096329 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 02:08:59.096341 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
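The BIOS-e820 map logged above can be totaled programmatically. A minimal sketch (the regex assumes the exact `BIOS-e820: [mem 0x...-0x...] <type>` layout shown in this log, with inclusive end addresses) that sums the "usable" regions:

```python
import re

# Matches the kernel's e820 line format: inclusive hex range, then a type label.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(log_lines):
    """Sum the sizes of all e820 regions labeled 'usable'."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # end address is inclusive
    return total

sample = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS",
]
print(usable_bytes(sample))  # 0xa0000 bytes = 655360
```

Run over the full map above, this gives the raw usable RAM before the kernel carves out its own reservations (hence the smaller "Memory: 2400616K/2567000K available" figure later in the log).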
Mar 7 02:08:59.096352 kernel: Console: colour dummy device 80x25
Mar 7 02:08:59.096364 kernel: printk: console [ttyS0] enabled
Mar 7 02:08:59.096375 kernel: ACPI: Core revision 20230628
Mar 7 02:08:59.096389 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 02:08:59.096407 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 02:08:59.096419 kernel: x2apic enabled
Mar 7 02:08:59.096432 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 02:08:59.096444 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 02:08:59.096457 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 02:08:59.096471 kernel: kvm-guest: setup PV IPIs
Mar 7 02:08:59.096484 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 02:08:59.096496 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 02:08:59.096508 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 02:08:59.096527 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 02:08:59.096539 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 02:08:59.096551 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 02:08:59.096563 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 02:08:59.096574 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 02:08:59.096586 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 02:08:59.096597 kernel: Speculative Store Bypass: Vulnerable
Mar 7 02:08:59.096609 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 02:08:59.096623 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 02:08:59.096639 kernel: active return thunk: srso_alias_return_thunk
Mar 7 02:08:59.096650 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 02:08:59.096662 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 02:08:59.096672 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 02:08:59.096685 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 02:08:59.096697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 02:08:59.096708 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 02:08:59.096718 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 02:08:59.096733 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 02:08:59.096744 kernel: Freeing SMP alternatives memory: 32K
Mar 7 02:08:59.096755 kernel: pid_max: default: 32768 minimum: 301
Mar 7 02:08:59.096767 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 02:08:59.096779 kernel: landlock: Up and running.
Mar 7 02:08:59.096792 kernel: SELinux: Initializing.
Mar 7 02:08:59.096804 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 02:08:59.096815 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 02:08:59.096826 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 02:08:59.096841 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 02:08:59.096852 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 02:08:59.096862 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 02:08:59.096875 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 02:08:59.096938 kernel: signal: max sigframe size: 1776
Mar 7 02:08:59.096950 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 02:08:59.096963 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 02:08:59.096975 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 02:08:59.096989 kernel: smp: Bringing up secondary CPUs ...
Mar 7 02:08:59.097007 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 02:08:59.097019 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 02:08:59.097031 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 02:08:59.097043 kernel: smpboot: Max logical packages: 1
Mar 7 02:08:59.097055 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 02:08:59.097068 kernel: devtmpfs: initialized
Mar 7 02:08:59.097080 kernel: x86/mm: Memory block size: 128MB
Mar 7 02:08:59.097092 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 7 02:08:59.097104 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 7 02:08:59.097122 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 7 02:08:59.097133 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 7 02:08:59.097145 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 7 02:08:59.097159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 02:08:59.097327 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 02:08:59.097345 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 02:08:59.097359 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 02:08:59.097371 kernel: audit: initializing netlink subsys (disabled)
Mar 7 02:08:59.097382 kernel: audit: type=2000 audit(1772849337.796:1): state=initialized audit_enabled=0 res=1
Mar 7 02:08:59.097399 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 02:08:59.097410 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 02:08:59.097423 kernel: cpuidle: using governor menu
Mar 7 02:08:59.097435 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 02:08:59.097447 kernel: dca service started, version 1.12.1
Mar 7 02:08:59.097460 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 02:08:59.097472 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 02:08:59.097484 kernel: PCI: Using configuration type 1 for base access
Mar 7 02:08:59.097497 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 02:08:59.097515 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 02:08:59.097528 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 02:08:59.097539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 02:08:59.097552 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 02:08:59.097565 kernel: ACPI: Added _OSI(Module Device)
Mar 7 02:08:59.097578 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 02:08:59.097591 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 02:08:59.097603 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 02:08:59.097614 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 02:08:59.097629 kernel: ACPI: Interpreter enabled
Mar 7 02:08:59.097640 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 02:08:59.097652 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 02:08:59.097663 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 02:08:59.097670 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 02:08:59.097677 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 02:08:59.097684 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 02:08:59.097869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 02:08:59.098057 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 02:08:59.098246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 02:08:59.098258 kernel: PCI host bridge to bus 0000:00
Mar 7 02:08:59.098481 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 02:08:59.098672 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 02:08:59.098856 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 02:08:59.099018 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 02:08:59.099137 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 02:08:59.099297 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 7 02:08:59.099411 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 02:08:59.099548 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 02:08:59.099678 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 02:08:59.099800 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 7 02:08:59.099964 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 7 02:08:59.100086 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 7 02:08:59.100262 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 7 02:08:59.100422 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 02:08:59.100642 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 02:08:59.100821 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 7 02:08:59.100989 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 7 02:08:59.101153 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 7 02:08:59.101438 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 02:08:59.101614 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 7 02:08:59.101815 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 7 02:08:59.102052 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 7 02:08:59.102315 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 02:08:59.102490 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 7 02:08:59.106362 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 7 02:08:59.106532 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 7 02:08:59.106659 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 7 02:08:59.106791 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 02:08:59.106967 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 02:08:59.107099 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 02:08:59.107296 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 7 02:08:59.107419 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 7 02:08:59.107547 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 02:08:59.107668 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 7 02:08:59.107678 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 02:08:59.107685 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 02:08:59.107692 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 02:08:59.107698 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 02:08:59.107709 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 02:08:59.107716 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 02:08:59.107722 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 02:08:59.107729 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 02:08:59.107735 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 02:08:59.107742 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 02:08:59.107749 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 02:08:59.107755 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 02:08:59.107762 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 02:08:59.107771 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 02:08:59.107777 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 02:08:59.107784 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 02:08:59.107790 kernel: iommu: Default domain type: Translated
Mar 7 02:08:59.107797 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 02:08:59.107804 kernel: efivars: Registered efivars operations
Mar 7 02:08:59.107810 kernel: PCI: Using ACPI for IRQ routing
Mar 7 02:08:59.107817 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 02:08:59.107824 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 7 02:08:59.107833 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 7 02:08:59.107840 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 7 02:08:59.107846 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 7 02:08:59.108007 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 02:08:59.108128 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 02:08:59.108300 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 02:08:59.108311 kernel: vgaarb: loaded
Mar 7 02:08:59.108319 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 02:08:59.108325 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 02:08:59.108336 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 02:08:59.108343 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 02:08:59.108350 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 02:08:59.108357 kernel: pnp: PnP ACPI init
Mar 7 02:08:59.108489 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 02:08:59.108500 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 02:08:59.108507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 02:08:59.108513 kernel: NET: Registered PF_INET protocol family
Mar 7 02:08:59.108523 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 02:08:59.108530 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 02:08:59.108537 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 02:08:59.108544 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 02:08:59.108550 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 02:08:59.108557 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 02:08:59.108564 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 02:08:59.108570 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 02:08:59.108577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 02:08:59.108586 kernel: NET: Registered PF_XDP protocol family
Mar 7 02:08:59.108706 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 7 02:08:59.108827 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 7 02:08:59.108988 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 02:08:59.109101 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 02:08:59.109259 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 02:08:59.109372 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 02:08:59.109487 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 02:08:59.109597 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 7 02:08:59.109606 kernel: PCI: CLS 0 bytes, default 64
Mar 7 02:08:59.109613 kernel: Initialise system trusted keyrings
Mar 7 02:08:59.109620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 02:08:59.109627 kernel: Key type asymmetric registered
Mar 7 02:08:59.109633 kernel: Asymmetric key parser 'x509' registered
Mar 7 02:08:59.109640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 02:08:59.109647 kernel: io scheduler mq-deadline registered
Mar 7 02:08:59.109657 kernel: io scheduler kyber registered
Mar 7 02:08:59.109664 kernel: io scheduler bfq registered
Mar 7 02:08:59.109670 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 02:08:59.109678 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 02:08:59.109684 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 02:08:59.109691 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 02:08:59.109698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 02:08:59.109705 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 02:08:59.109711 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 02:08:59.109720 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 02:08:59.109727 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 02:08:59.109855 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 02:08:59.109865 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 02:08:59.110020 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 02:08:59.110136 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T02:08:58 UTC (1772849338)
Mar 7 02:08:59.110296 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 02:08:59.110308 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 02:08:59.110319 kernel: efifb: probing for efifb
Mar 7 02:08:59.110325 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 7 02:08:59.110332 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 7 02:08:59.110339 kernel: efifb: scrolling: redraw
Mar 7 02:08:59.110345 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 7 02:08:59.110352 kernel: Console: switching to colour frame buffer device 100x37
Mar 7 02:08:59.110359 kernel: fb0: EFI VGA frame buffer device
Mar 7 02:08:59.110366 kernel: pstore: Using crash dump compression: deflate
Mar 7 02:08:59.110372 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 02:08:59.110381 kernel: NET: Registered PF_INET6 protocol family
Mar 7 02:08:59.110388 kernel: Segment Routing with IPv6
Mar 7 02:08:59.110394 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 02:08:59.110401 kernel: NET: Registered PF_PACKET protocol family
Mar 7 02:08:59.110411 kernel: Key type dns_resolver registered
Mar 7 02:08:59.110425 kernel: IPI shorthand broadcast: enabled
Mar 7 02:08:59.110517 kernel: sched_clock: Marking stable (1164019649, 412737451)->(1747636170, -170879070)
Mar 7 02:08:59.110560 kernel: registered taskstats version 1
Mar 7 02:08:59.110573 kernel: Loading compiled-in X.509 certificates
Mar 7 02:08:59.110602 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 02:08:59.110625 kernel: Key type .fscrypt registered
Mar 7 02:08:59.110632 kernel: Key type fscrypt-provisioning registered
Mar 7 02:08:59.110639 kernel: ima: No TPM chip found, activating TPM-bypass!
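The rtc_cmos entry in this boot prints both forms of the wall-clock time, e.g. "setting system clock to 2026-03-07T02:08:58 UTC (1772849338)". The epoch value can be converted back to the logged UTC string with a short helper:

```python
from datetime import datetime, timezone

def rtc_utc(epoch):
    """Render an epoch-seconds value in the ISO-like format rtc_cmos logs."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")

print(rtc_utc(1772849338))  # 2026-03-07T02:08:58
```

This matches the pair the kernel logged, and is handy for cross-checking other epoch values in the capture, such as the `audit(1772849337.796:1)` timestamp.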
Mar 7 02:08:59.110646 kernel: ima: Allocated hash algorithm: sha1
Mar 7 02:08:59.110653 kernel: ima: No architecture policies found
Mar 7 02:08:59.110674 kernel: clk: Disabling unused clocks
Mar 7 02:08:59.110681 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 02:08:59.110688 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 02:08:59.110698 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 02:08:59.110705 kernel: Run /init as init process
Mar 7 02:08:59.110726 kernel: with arguments:
Mar 7 02:08:59.110748 kernel: /init
Mar 7 02:08:59.110756 kernel: with environment:
Mar 7 02:08:59.110763 kernel: HOME=/
Mar 7 02:08:59.110770 kernel: TERM=linux
Mar 7 02:08:59.110779 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 02:08:59.110792 systemd[1]: Detected virtualization kvm.
Mar 7 02:08:59.110800 systemd[1]: Detected architecture x86-64.
Mar 7 02:08:59.110807 systemd[1]: Running in initrd.
Mar 7 02:08:59.110814 systemd[1]: No hostname configured, using default hostname.
Mar 7 02:08:59.110821 systemd[1]: Hostname set to .
Mar 7 02:08:59.110829 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 02:08:59.110836 systemd[1]: Queued start job for default target initrd.target.
Mar 7 02:08:59.110843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:08:59.110853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:08:59.110861 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 02:08:59.110869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 02:08:59.110903 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 02:08:59.110915 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 02:08:59.110929 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 02:08:59.110936 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 02:08:59.110944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:08:59.110951 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 02:08:59.110958 systemd[1]: Reached target paths.target - Path Units.
Mar 7 02:08:59.110966 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 02:08:59.110976 systemd[1]: Reached target swap.target - Swaps.
Mar 7 02:08:59.110983 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 02:08:59.110991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 02:08:59.110998 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 02:08:59.111006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 02:08:59.111013 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 02:08:59.111020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:08:59.111028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:08:59.111035 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:08:59.111045 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 02:08:59.111052 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 02:08:59.111060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 02:08:59.111067 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 02:08:59.114274 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 02:08:59.114306 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 02:08:59.114315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 02:08:59.114323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:08:59.114330 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 02:08:59.114343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:08:59.114350 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 02:08:59.114382 systemd-journald[194]: Collecting audit messages is disabled.
Mar 7 02:08:59.114404 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 02:08:59.114412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:08:59.114420 systemd-journald[194]: Journal started
Mar 7 02:08:59.114438 systemd-journald[194]: Runtime Journal (/run/log/journal/222863790ab442828d217efc8b6039b8) is 6.0M, max 48.3M, 42.2M free.
Mar 7 02:08:59.091446 systemd-modules-load[195]: Inserted module 'overlay'
Mar 7 02:08:59.120774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:08:59.128551 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 02:08:59.138245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 02:08:59.141474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:08:59.144652 kernel: Bridge firewalling registered
Mar 7 02:08:59.142165 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 7 02:08:59.144839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 02:08:59.150355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 02:08:59.161388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:08:59.164096 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 02:08:59.173705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:08:59.182362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 02:08:59.189389 dracut-cmdline[222]: dracut-dracut-053
Mar 7 02:08:59.191529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:08:59.192811 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 02:08:59.218409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:08:59.226527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:08:59.236386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 02:08:59.280807 systemd-resolved[278]: Positive Trust Anchors:
Mar 7 02:08:59.285007 kernel: SCSI subsystem initialized
Mar 7 02:08:59.280852 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 02:08:59.280931 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 02:08:59.283965 systemd-resolved[278]: Defaulting to hostname 'linux'.
Mar 7 02:08:59.285556 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 02:08:59.292625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:08:59.329223 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 02:08:59.343271 kernel: iscsi: registered transport (tcp)
Mar 7 02:08:59.367308 kernel: iscsi: registered transport (qla4xxx)
Mar 7 02:08:59.367379 kernel: QLogic iSCSI HBA Driver
Mar 7 02:08:59.416216 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 02:08:59.442356 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 02:08:59.473074 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 02:08:59.473111 kernel: device-mapper: uevent: version 1.0.3
Mar 7 02:08:59.476247 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 02:08:59.520253 kernel: raid6: avx2x4 gen() 33635 MB/s
Mar 7 02:08:59.538250 kernel: raid6: avx2x2 gen() 31239 MB/s
Mar 7 02:08:59.557489 kernel: raid6: avx2x1 gen() 25841 MB/s
Mar 7 02:08:59.557525 kernel: raid6: using algorithm avx2x4 gen() 33635 MB/s
Mar 7 02:08:59.577482 kernel: raid6: .... xor() 4992 MB/s, rmw enabled
Mar 7 02:08:59.577535 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 02:08:59.598246 kernel: xor: automatically using best checksumming function avx
Mar 7 02:08:59.743256 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 02:08:59.756999 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 02:08:59.774478 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:08:59.789608 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Mar 7 02:08:59.796430 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:08:59.810421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 02:08:59.823914 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Mar 7 02:08:59.860257 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 02:08:59.877352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:08:59.980331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:08:59.998439 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 02:09:00.013289 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 02:09:00.020248 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:09:00.027666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:09:00.037118 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:09:00.052340 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 02:09:00.052614 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 02:09:00.063792 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 02:09:00.068511 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 02:09:00.089233 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 02:09:00.089282 kernel: GPT:9289727 != 19775487
Mar 7 02:09:00.089294 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 02:09:00.089303 kernel: libata version 3.00 loaded.
Mar 7 02:09:00.089314 kernel: GPT:9289727 != 19775487
Mar 7 02:09:00.089323 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 02:09:00.089332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:09:00.088532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 02:09:00.088674 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:09:00.095513 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:09:00.109691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:09:00.113259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:09:00.124244 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 02:09:00.124275 kernel: AES CTR mode by8 optimization enabled
Mar 7 02:09:00.119805 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:09:00.133411 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 02:09:00.138286 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 02:09:00.142102 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:09:00.147812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 02:09:00.175859 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 02:09:00.176823 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 02:09:00.177015 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (461)
Mar 7 02:09:00.177028 kernel: scsi host0: ahci
Mar 7 02:09:00.177238 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465)
Mar 7 02:09:00.175694 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 02:09:00.188442 kernel: scsi host1: ahci
Mar 7 02:09:00.191400 kernel: scsi host2: ahci
Mar 7 02:09:00.195081 kernel: scsi host3: ahci
Mar 7 02:09:00.195462 kernel: scsi host4: ahci
Mar 7 02:09:00.195627 kernel: scsi host5: ahci
Mar 7 02:09:00.197714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:09:00.222679 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 7 02:09:00.222721 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 7 02:09:00.222738 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 7 02:09:00.222753 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 7 02:09:00.222768 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 7 02:09:00.222784 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 7 02:09:00.227522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 02:09:00.242569 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 02:09:00.246294 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 02:09:00.259050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 02:09:00.277445 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 02:09:00.283955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:09:00.295030 disk-uuid[560]: Primary Header is updated.
Mar 7 02:09:00.295030 disk-uuid[560]: Secondary Entries is updated.
Mar 7 02:09:00.295030 disk-uuid[560]: Secondary Header is updated.
Mar 7 02:09:00.301771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:09:00.306268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:09:00.308817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:09:00.517267 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 02:09:00.525232 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 02:09:00.525302 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 02:09:00.531302 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 02:09:00.531348 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 02:09:00.534249 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 02:09:00.536260 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 02:09:00.538950 kernel: ata3.00: applying bridge limits
Mar 7 02:09:00.541280 kernel: ata3.00: configured for UDMA/100
Mar 7 02:09:00.545221 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 02:09:00.600004 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 02:09:00.600428 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 02:09:00.617244 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 02:09:01.314260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:09:01.314757 disk-uuid[561]: The operation has completed successfully.
Mar 7 02:09:01.351348 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 02:09:01.351499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 02:09:01.377353 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 02:09:01.389824 sh[598]: Success
Mar 7 02:09:01.408250 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 02:09:01.454581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 02:09:01.479738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 02:09:01.487682 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 02:09:01.504438 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 02:09:01.504466 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:09:01.504477 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 02:09:01.508160 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 02:09:01.510805 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 02:09:01.522060 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 02:09:01.525275 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 02:09:01.538548 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 02:09:01.545625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 02:09:01.566300 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:09:01.566325 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:09:01.566336 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:09:01.566346 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:09:01.576520 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 02:09:01.582745 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:09:01.590318 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 02:09:01.607489 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 02:09:01.671656 ignition[692]: Ignition 2.19.0
Mar 7 02:09:01.671688 ignition[692]: Stage: fetch-offline
Mar 7 02:09:01.671735 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:09:01.671746 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:09:01.671964 ignition[692]: parsed url from cmdline: ""
Mar 7 02:09:01.671974 ignition[692]: no config URL provided
Mar 7 02:09:01.671985 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 02:09:01.672023 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Mar 7 02:09:01.672060 ignition[692]: op(1): [started] loading QEMU firmware config module
Mar 7 02:09:01.672066 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 02:09:01.686036 ignition[692]: op(1): [finished] loading QEMU firmware config module
Mar 7 02:09:01.723149 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 02:09:01.744376 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 02:09:01.769360 systemd-networkd[786]: lo: Link UP
Mar 7 02:09:01.769386 systemd-networkd[786]: lo: Gained carrier
Mar 7 02:09:01.771151 systemd-networkd[786]: Enumeration completed
Mar 7 02:09:01.771986 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 02:09:01.772105 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:09:01.772110 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 02:09:01.773223 systemd-networkd[786]: eth0: Link UP
Mar 7 02:09:01.773227 systemd-networkd[786]: eth0: Gained carrier
Mar 7 02:09:01.773234 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:09:01.777271 systemd[1]: Reached target network.target - Network.
Mar 7 02:09:01.801236 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 02:09:01.902828 ignition[692]: parsing config with SHA512: bc3f7e8a3b0cc2198c84baf4a0fd599bed4ab9138795e4187c54543669a23f7ac8f8516a4744d333d12a1040adc0e2979ad9a0e9aaeaa46743ec630846f9cf83
Mar 7 02:09:01.906611 unknown[692]: fetched base config from "system"
Mar 7 02:09:01.907030 ignition[692]: fetch-offline: fetch-offline passed
Mar 7 02:09:01.906629 unknown[692]: fetched user config from "qemu"
Mar 7 02:09:01.907092 ignition[692]: Ignition finished successfully
Mar 7 02:09:01.921115 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 02:09:01.924514 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 02:09:01.936603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 02:09:01.958867 ignition[790]: Ignition 2.19.0
Mar 7 02:09:01.958875 ignition[790]: Stage: kargs
Mar 7 02:09:01.959082 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:09:01.959094 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:09:01.960059 ignition[790]: kargs: kargs passed
Mar 7 02:09:01.960109 ignition[790]: Ignition finished successfully
Mar 7 02:09:01.974842 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 02:09:01.993468 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 02:09:02.012800 ignition[798]: Ignition 2.19.0
Mar 7 02:09:02.012828 ignition[798]: Stage: disks
Mar 7 02:09:02.013015 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:09:02.013027 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:09:02.013724 ignition[798]: disks: disks passed
Mar 7 02:09:02.013765 ignition[798]: Ignition finished successfully
Mar 7 02:09:02.030599 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 02:09:02.032064 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 02:09:02.038963 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 02:09:02.046146 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:09:02.054879 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 02:09:02.063872 systemd[1]: Reached target basic.target - Basic System.
Mar 7 02:09:02.082367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 02:09:02.102600 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 02:09:02.110445 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 02:09:02.128394 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 02:09:02.237232 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 02:09:02.237682 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 02:09:02.240944 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:09:02.257623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:09:02.270476 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Mar 7 02:09:02.261392 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 02:09:02.286635 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:09:02.286656 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:09:02.286668 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:09:02.272658 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 02:09:02.300320 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:09:02.272708 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 02:09:02.272733 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:09:02.286657 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 02:09:02.291379 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 02:09:02.297612 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:09:02.341663 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 02:09:02.348377 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 7 02:09:02.356937 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 02:09:02.366610 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 02:09:02.487648 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 02:09:02.496419 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 02:09:02.499588 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 02:09:02.518566 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 02:09:02.524064 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:09:02.543591 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 02:09:02.554440 ignition[929]: INFO : Ignition 2.19.0
Mar 7 02:09:02.554440 ignition[929]: INFO : Stage: mount
Mar 7 02:09:02.561412 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:09:02.561412 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:09:02.561412 ignition[929]: INFO : mount: mount passed
Mar 7 02:09:02.561412 ignition[929]: INFO : Ignition finished successfully
Mar 7 02:09:02.557581 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 02:09:02.573341 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 02:09:02.581880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:09:02.599868 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Mar 7 02:09:02.599931 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:09:02.599945 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:09:02.602356 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:09:02.609239 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:09:02.610987 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:09:02.652604 ignition[959]: INFO : Ignition 2.19.0
Mar 7 02:09:02.652604 ignition[959]: INFO : Stage: files
Mar 7 02:09:02.658307 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:09:02.658307 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:09:02.658307 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 02:09:02.658307 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 02:09:02.658307 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 02:09:02.658307 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 02:09:02.658307 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 02:09:02.695928 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 02:09:02.695928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:09:02.695928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 02:09:02.658875 unknown[959]: wrote ssh authorized keys file for user: core
Mar 7 02:09:02.735271 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 02:09:02.826115 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:09:02.826115 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 02:09:02.840264 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 02:09:02.840264 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:09:02.853782 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:09:02.853782 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:09:02.867476 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:09:02.874473 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:09:02.880446 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:09:02.886461 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:09:02.892601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:09:02.892601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 02:09:02.892601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 02:09:02.892601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 02:09:02.892601 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 02:09:03.245797 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 02:09:03.475485 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 7 02:09:03.989787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 02:09:03.996153 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 02:09:03.999878 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:09:04.005637 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:09:04.005637 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 02:09:04.005637 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 02:09:04.005637 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:09:04.023366 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:09:04.023366 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 02:09:04.032111 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:09:04.062452 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:09:04.072091 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:09:04.076319 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:09:04.076319 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 02:09:04.076319 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 02:09:04.076319 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:09:04.076319 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:09:04.076319 ignition[959]: INFO : files: files passed
Mar 7 02:09:04.076319 ignition[959]: INFO : Ignition finished successfully
Mar 7 02:09:04.081249 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 02:09:04.109439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 02:09:04.111967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 02:09:04.128617 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 02:09:04.128620 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 02:09:04.128802 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 02:09:04.143932 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:09:04.148497 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:09:04.152803 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:09:04.159762 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:09:04.167504 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 02:09:04.179402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 02:09:04.205071 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 02:09:04.205307 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 02:09:04.211277 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 02:09:04.217078 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 02:09:04.222868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 02:09:04.228355 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 02:09:04.249951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:09:04.261504 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 02:09:04.273231 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:09:04.276565 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:09:04.282942 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 02:09:04.288849 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 02:09:04.289003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:09:04.296157 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 02:09:04.301689 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 02:09:04.308311 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 02:09:04.314720 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:09:04.320770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 02:09:04.325125 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 02:09:04.332446 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:09:04.336121 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 02:09:04.342865 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 02:09:04.349385 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 02:09:04.356386 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 02:09:04.356521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 02:09:04.364978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 02:09:04.371361 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:09:04.378675 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 02:09:04.378994 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:09:04.385527 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 02:09:04.385677 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 02:09:04.392695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 02:09:04.392826 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 02:09:04.400140 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 02:09:04.468499 ignition[1015]: INFO : Ignition 2.19.0
Mar 7 02:09:04.468499 ignition[1015]: INFO : Stage: umount
Mar 7 02:09:04.468499 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:09:04.468499 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:09:04.405940 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 02:09:04.501470 ignition[1015]: INFO : umount: umount passed
Mar 7 02:09:04.501470 ignition[1015]: INFO : Ignition finished successfully
Mar 7 02:09:04.409287 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:09:04.413445 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 02:09:04.414885 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 02:09:04.416085 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 02:09:04.416282 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 02:09:04.416598 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 02:09:04.416730 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 02:09:04.417158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 02:09:04.417333 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:09:04.417759 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 02:09:04.417992 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 02:09:04.448534 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 02:09:04.457463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 02:09:04.460849 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 02:09:04.463428 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:09:04.468659 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 02:09:04.468859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 02:09:04.480035 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 02:09:04.480273 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 02:09:04.486960 systemd[1]: Stopped target network.target - Network.
Mar 7 02:09:04.493845 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 02:09:04.493971 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 02:09:04.501428 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 02:09:04.501485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 02:09:04.506851 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 02:09:04.506964 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 02:09:04.512795 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 02:09:04.512845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 02:09:04.516498 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 02:09:04.523613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 02:09:04.529278 systemd-networkd[786]: eth0: DHCPv6 lease lost
Mar 7 02:09:04.530150 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 02:09:04.531492 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 02:09:04.531668 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 02:09:04.536303 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 02:09:04.536494 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 02:09:04.540717 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 02:09:04.540842 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 02:09:04.550256 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 02:09:04.550304 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:09:04.573370 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 02:09:04.578387 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 02:09:04.578464 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 02:09:04.585092 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 02:09:04.585160 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:09:04.590727 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 02:09:04.590798 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:09:04.596957 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 02:09:04.597022 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:09:04.603070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:09:04.609121 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 02:09:04.609277 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 02:09:04.641084 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 02:09:04.651598 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:09:04.686583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 02:09:04.686659 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:09:04.688315 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 02:09:04.688362 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:09:04.697164 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 02:09:04.697278 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 02:09:04.707721 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 02:09:04.707805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 02:09:04.714366 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 02:09:04.714442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:09:04.722620 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 02:09:04.722682 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 02:09:04.732033 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 02:09:04.735124 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 02:09:04.735231 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:09:04.740963 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 02:09:04.741017 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:09:04.755096 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 02:09:04.755152 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:09:04.757052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:09:04.757100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:09:04.765238 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 02:09:04.765368 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 02:09:04.770577 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 02:09:04.851164 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 7 02:09:04.770719 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 02:09:04.777028 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 02:09:04.799609 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 02:09:04.810052 systemd[1]: Switching root.
Mar 7 02:09:04.869130 systemd-journald[194]: Journal stopped
Mar 7 02:09:06.090487 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 02:09:06.090568 kernel: SELinux: policy capability open_perms=1
Mar 7 02:09:06.090587 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 02:09:06.090598 kernel: SELinux: policy capability always_check_network=0
Mar 7 02:09:06.090614 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 02:09:06.090628 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 02:09:06.090638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 02:09:06.090648 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 02:09:06.090658 kernel: audit: type=1403 audit(1772849345.032:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 02:09:06.090670 systemd[1]: Successfully loaded SELinux policy in 54.673ms.
Mar 7 02:09:06.090692 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.172ms.
Mar 7 02:09:06.090708 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 02:09:06.090719 systemd[1]: Detected virtualization kvm.
Mar 7 02:09:06.090730 systemd[1]: Detected architecture x86-64.
Mar 7 02:09:06.090743 systemd[1]: Detected first boot.
Mar 7 02:09:06.090754 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 02:09:06.090764 zram_generator::config[1059]: No configuration found.
Mar 7 02:09:06.090776 systemd[1]: Populated /etc with preset unit settings.
Mar 7 02:09:06.090786 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 02:09:06.090797 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 02:09:06.090808 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 02:09:06.090819 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 02:09:06.090832 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 02:09:06.090849 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 02:09:06.090860 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 02:09:06.090870 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 02:09:06.090881 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 02:09:06.090896 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 02:09:06.090943 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 02:09:06.090953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:09:06.090965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:09:06.090979 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 02:09:06.090990 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 02:09:06.091004 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 02:09:06.091023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 02:09:06.091043 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 02:09:06.091057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:09:06.091068 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 02:09:06.091078 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 02:09:06.091089 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:09:06.091108 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 02:09:06.091119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:09:06.091130 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:09:06.091140 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 02:09:06.091153 systemd[1]: Reached target swap.target - Swaps.
Mar 7 02:09:06.091163 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 02:09:06.091221 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 02:09:06.091234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:09:06.091248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:09:06.091259 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:09:06.091270 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 02:09:06.091280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 02:09:06.091291 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 02:09:06.091302 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 02:09:06.091314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:09:06.091324 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 02:09:06.091337 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 02:09:06.091348 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 02:09:06.091359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 02:09:06.091370 systemd[1]: Reached target machines.target - Containers.
Mar 7 02:09:06.091381 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 02:09:06.091391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:09:06.091402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 02:09:06.091413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 02:09:06.091424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:09:06.091439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:09:06.091449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:09:06.091460 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 02:09:06.091471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:09:06.091482 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 02:09:06.091493 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 02:09:06.091503 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 02:09:06.091514 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 02:09:06.091527 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 02:09:06.091537 kernel: fuse: init (API version 7.39)
Mar 7 02:09:06.091547 kernel: ACPI: bus type drm_connector registered
Mar 7 02:09:06.091558 kernel: loop: module loaded
Mar 7 02:09:06.091568 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 02:09:06.091579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 02:09:06.091590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 02:09:06.091601 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 02:09:06.091632 systemd-journald[1143]: Collecting audit messages is disabled.
Mar 7 02:09:06.091660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:09:06.091673 systemd-journald[1143]: Journal started
Mar 7 02:09:06.091691 systemd-journald[1143]: Runtime Journal (/run/log/journal/222863790ab442828d217efc8b6039b8) is 6.0M, max 48.3M, 42.2M free.
Mar 7 02:09:05.648970 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 02:09:05.666023 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 02:09:05.666757 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 02:09:05.667129 systemd[1]: systemd-journald.service: Consumed 1.420s CPU time.
Mar 7 02:09:06.100655 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 02:09:06.100707 systemd[1]: Stopped verity-setup.service.
Mar 7 02:09:06.108295 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:09:06.113379 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 02:09:06.115947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 02:09:06.118827 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 02:09:06.121962 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 02:09:06.124772 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 02:09:06.127891 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 02:09:06.131051 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 02:09:06.134295 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 02:09:06.138270 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:09:06.142410 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 02:09:06.142689 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 02:09:06.146750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:09:06.147072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:09:06.150770 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:09:06.151129 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:09:06.154870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:09:06.155144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:09:06.159162 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 02:09:06.159525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 02:09:06.163139 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:09:06.163512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:09:06.167299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:09:06.171524 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 02:09:06.175754 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 02:09:06.196682 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 02:09:06.210416 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 02:09:06.216004 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 02:09:06.219405 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 02:09:06.219480 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:09:06.223838 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 02:09:06.234432 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 02:09:06.239303 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 02:09:06.242468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:09:06.244314 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 02:09:06.249169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 02:09:06.253330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:09:06.259382 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 02:09:06.263609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:09:06.270070 systemd-journald[1143]: Time spent on flushing to /var/log/journal/222863790ab442828d217efc8b6039b8 is 12.525ms for 980 entries.
Mar 7 02:09:06.270070 systemd-journald[1143]: System Journal (/var/log/journal/222863790ab442828d217efc8b6039b8) is 8.0M, max 195.6M, 187.6M free.
Mar 7 02:09:06.310523 systemd-journald[1143]: Received client request to flush runtime journal.
Mar 7 02:09:06.270331 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 02:09:06.285424 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 02:09:06.293749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 02:09:06.300946 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:09:06.305957 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 02:09:06.310473 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 02:09:06.315541 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 02:09:06.319797 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 02:09:06.323764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 02:09:06.330255 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 02:09:06.330934 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 02:09:06.344465 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 02:09:06.350016 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 02:09:06.354840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:09:06.373631 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 02:09:06.375711 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 02:09:06.383490 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Mar 7 02:09:06.383539 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Mar 7 02:09:06.386592 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 02:09:06.388610 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 02:09:06.393673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:09:06.406543 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 02:09:06.419272 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 02:09:06.463541 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 02:09:06.479519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 02:09:06.484954 kernel: loop2: detected capacity change from 0 to 219192
Mar 7 02:09:06.507324 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 7 02:09:06.507375 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 7 02:09:06.515478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:09:06.529236 kernel: loop3: detected capacity change from 0 to 142488
Mar 7 02:09:06.549266 kernel: loop4: detected capacity change from 0 to 140768
Mar 7 02:09:06.570535 kernel: loop5: detected capacity change from 0 to 219192
Mar 7 02:09:06.582446 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 02:09:06.583264 (sd-merge)[1203]: Merged extensions into '/usr'.
Mar 7 02:09:06.592133 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 02:09:06.592148 systemd[1]: Reloading...
Mar 7 02:09:06.657240 zram_generator::config[1226]: No configuration found.
Mar 7 02:09:06.732714 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 02:09:06.786410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 02:09:06.828620 systemd[1]: Reloading finished in 235 ms.
Mar 7 02:09:06.858806 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 02:09:06.862454 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 02:09:06.866065 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 02:09:06.888604 systemd[1]: Starting ensure-sysext.service...
Mar 7 02:09:06.892897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 02:09:06.898396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:09:06.905856 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Mar 7 02:09:06.906053 systemd[1]: Reloading...
Mar 7 02:09:06.920591 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 02:09:06.921333 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 02:09:06.923312 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 02:09:06.923847 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Mar 7 02:09:06.924048 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Mar 7 02:09:06.928981 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:09:06.929029 systemd-tmpfiles[1268]: Skipping /boot
Mar 7 02:09:06.932715 systemd-udevd[1269]: Using default interface naming scheme 'v255'.
Mar 7 02:09:06.951602 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 02:09:06.951788 systemd-tmpfiles[1268]: Skipping /boot Mar 7 02:09:06.978291 zram_generator::config[1301]: No configuration found. Mar 7 02:09:07.070278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1311) Mar 7 02:09:07.128258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 7 02:09:07.136255 kernel: ACPI: button: Power Button [PWRF] Mar 7 02:09:07.131735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:09:07.152479 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 7 02:09:07.152771 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 02:09:07.155291 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 02:09:07.171885 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 02:09:07.175341 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 7 02:09:07.191860 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 02:09:07.192026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 02:09:07.196504 systemd[1]: Reloading finished in 289 ms. Mar 7 02:09:07.216553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 02:09:07.230741 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 02:09:07.223994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 02:09:07.273674 systemd[1]: Finished ensure-sysext.service. 
Mar 7 02:09:07.282988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 02:09:07.284804 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 02:09:07.290288 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 02:09:07.293755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 02:09:07.296107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 02:09:07.305465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 02:09:07.319787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 02:09:07.327545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 02:09:07.331653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 02:09:07.336474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 02:09:07.343735 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 02:09:07.403567 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 02:09:07.409989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 02:09:07.418704 augenrules[1388]: No rules Mar 7 02:09:07.419431 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 02:09:07.433683 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 7 02:09:07.444218 kernel: kvm_amd: TSC scaling supported Mar 7 02:09:07.444269 kernel: kvm_amd: Nested Virtualization enabled Mar 7 02:09:07.444289 kernel: kvm_amd: Nested Paging enabled Mar 7 02:09:07.446125 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 02:09:07.446232 kernel: kvm_amd: PMU virtualization is disabled Mar 7 02:09:07.446092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:09:07.451425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 02:09:07.453378 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 02:09:07.457690 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 02:09:07.463888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 02:09:07.465815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 02:09:07.470331 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 02:09:07.470953 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 02:09:07.476370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 02:09:07.477794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 02:09:07.480343 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 02:09:07.480747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 02:09:07.481733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 02:09:07.482669 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 02:09:07.500511 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 7 02:09:07.504287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 02:09:07.504485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 02:09:07.535842 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 02:09:07.541340 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 02:09:07.542664 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 02:09:07.549311 kernel: EDAC MC: Ver: 3.0.0 Mar 7 02:09:07.563111 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 02:09:07.575157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:09:07.584479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 02:09:07.588618 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 02:09:07.600418 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 02:09:07.618439 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 02:09:07.655125 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 02:09:07.659625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:09:07.665360 systemd-resolved[1390]: Positive Trust Anchors: Mar 7 02:09:07.665395 systemd-resolved[1390]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 02:09:07.665422 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 02:09:07.670001 systemd-resolved[1390]: Defaulting to hostname 'linux'. Mar 7 02:09:07.670549 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 02:09:07.673876 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 02:09:07.677321 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 02:09:07.677506 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 02:09:07.679063 systemd-networkd[1382]: lo: Link UP Mar 7 02:09:07.679069 systemd-networkd[1382]: lo: Gained carrier Mar 7 02:09:07.680746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 02:09:07.681724 systemd-networkd[1382]: Enumeration completed Mar 7 02:09:07.682951 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:09:07.682955 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 02:09:07.683980 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 7 02:09:07.684730 systemd-networkd[1382]: eth0: Link UP Mar 7 02:09:07.684735 systemd-networkd[1382]: eth0: Gained carrier Mar 7 02:09:07.684748 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:09:07.687564 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 02:09:07.691081 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 02:09:07.694633 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 02:09:07.698254 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 02:09:07.698286 systemd[1]: Reached target paths.target - Path Units. Mar 7 02:09:07.701460 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 02:09:07.704987 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 02:09:07.708428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 02:09:07.712276 systemd[1]: Reached target timers.target - Timer Units. Mar 7 02:09:07.716253 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 02:09:07.721286 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 02:09:07.721610 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 02:09:07.722865 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Mar 7 02:09:08.578198 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 02:09:08.578266 systemd-timesyncd[1393]: Initial clock synchronization to Sat 2026-03-07 02:09:08.578083 UTC. Mar 7 02:09:08.578419 systemd-resolved[1390]: Clock change detected. Flushing caches. 
Mar 7 02:09:08.603267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 02:09:08.608191 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 02:09:08.613212 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 02:09:08.618333 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 02:09:08.623501 systemd[1]: Reached target network.target - Network. Mar 7 02:09:08.626343 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 02:09:08.629651 systemd[1]: Reached target basic.target - Basic System. Mar 7 02:09:08.633188 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 02:09:08.633274 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 02:09:08.647132 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 02:09:08.651654 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 02:09:08.655429 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 02:09:08.659632 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 02:09:08.662682 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 02:09:08.665440 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 02:09:08.671805 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 02:09:08.679043 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 7 02:09:08.681799 jq[1434]: false Mar 7 02:09:08.686117 extend-filesystems[1435]: Found loop3 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found loop4 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found loop5 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found sr0 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda1 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda2 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda3 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found usr Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda4 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda6 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda7 Mar 7 02:09:08.688207 extend-filesystems[1435]: Found vda9 Mar 7 02:09:08.688207 extend-filesystems[1435]: Checking size of /dev/vda9 Mar 7 02:09:08.724504 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 02:09:08.702458 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 02:09:08.726470 extend-filesystems[1435]: Resized partition /dev/vda9 Mar 7 02:09:08.721034 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 02:09:08.730435 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Mar 7 02:09:08.735057 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 02:09:08.731463 dbus-daemon[1433]: [system] SELinux support is enabled Mar 7 02:09:08.742975 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 02:09:08.743492 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 02:09:08.745064 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 7 02:09:08.751883 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1309) Mar 7 02:09:08.757023 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 02:09:08.762757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 02:09:08.771744 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 02:09:08.782322 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 02:09:08.784941 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 02:09:08.794804 jq[1457]: true Mar 7 02:09:08.785403 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 02:09:08.797130 update_engine[1456]: I20260307 02:09:08.796499 1456 main.cc:92] Flatcar Update Engine starting Mar 7 02:09:08.797317 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 02:09:08.797317 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 02:09:08.797317 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 02:09:08.785645 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 02:09:08.818222 update_engine[1456]: I20260307 02:09:08.807993 1456 update_check_scheduler.cc:74] Next update check in 8m20s Mar 7 02:09:08.818254 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Mar 7 02:09:08.793631 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 02:09:08.793926 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 02:09:08.801894 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 02:09:08.802118 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 7 02:09:08.823489 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 02:09:08.823525 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 02:09:08.824012 systemd-logind[1453]: New seat seat0. Mar 7 02:09:08.827393 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 02:09:08.833375 jq[1462]: true Mar 7 02:09:08.842879 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 02:09:08.854359 tar[1459]: linux-amd64/LICENSE Mar 7 02:09:08.854710 tar[1459]: linux-amd64/helm Mar 7 02:09:08.867320 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 02:09:08.870011 systemd[1]: Started update-engine.service - Update Engine. Mar 7 02:09:08.877896 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 02:09:08.878067 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 02:09:08.883037 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 02:09:08.883143 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 02:09:08.896883 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 02:09:08.909358 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Mar 7 02:09:08.911984 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 02:09:08.918036 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 7 02:09:08.953411 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 02:09:08.971652 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 02:09:09.001356 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 02:09:09.014235 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 02:09:09.034283 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 02:09:09.034648 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 02:09:09.044310 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 02:09:09.075857 containerd[1463]: time="2026-03-07T02:09:09.074356810Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 02:09:09.078681 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 02:09:09.088427 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 02:09:09.096246 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 02:09:09.101256 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 02:09:09.105718 containerd[1463]: time="2026-03-07T02:09:09.105616923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:09:09.109015 containerd[1463]: time="2026-03-07T02:09:09.108909515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:09:09.109015 containerd[1463]: time="2026-03-07T02:09:09.108990076Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 7 02:09:09.109078 containerd[1463]: time="2026-03-07T02:09:09.109019781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 02:09:09.109407 containerd[1463]: time="2026-03-07T02:09:09.109309231Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 02:09:09.109407 containerd[1463]: time="2026-03-07T02:09:09.109377188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 02:09:09.109606 containerd[1463]: time="2026-03-07T02:09:09.109486612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:09:09.109606 containerd[1463]: time="2026-03-07T02:09:09.109574697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:09:09.110093 containerd[1463]: time="2026-03-07T02:09:09.110005190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:09:09.110093 containerd[1463]: time="2026-03-07T02:09:09.110065904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 02:09:09.110138 containerd[1463]: time="2026-03-07T02:09:09.110090700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:09:09.110138 containerd[1463]: time="2026-03-07T02:09:09.110110898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 7 02:09:09.110342 containerd[1463]: time="2026-03-07T02:09:09.110239378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:09:09.110803 containerd[1463]: time="2026-03-07T02:09:09.110679119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 02:09:09.111086 containerd[1463]: time="2026-03-07T02:09:09.110978197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 02:09:09.111086 containerd[1463]: time="2026-03-07T02:09:09.111036206Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 02:09:09.111251 containerd[1463]: time="2026-03-07T02:09:09.111197887Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 02:09:09.111366 containerd[1463]: time="2026-03-07T02:09:09.111291512Z" level=info msg="metadata content store policy set" policy=shared Mar 7 02:09:09.118243 containerd[1463]: time="2026-03-07T02:09:09.118137556Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 02:09:09.118285 containerd[1463]: time="2026-03-07T02:09:09.118240038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 02:09:09.118285 containerd[1463]: time="2026-03-07T02:09:09.118268010Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 02:09:09.118328 containerd[1463]: time="2026-03-07T02:09:09.118292496Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 7 02:09:09.118328 containerd[1463]: time="2026-03-07T02:09:09.118317442Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 02:09:09.118595 containerd[1463]: time="2026-03-07T02:09:09.118515151Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 02:09:09.119113 containerd[1463]: time="2026-03-07T02:09:09.119075066Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 02:09:09.119467 containerd[1463]: time="2026-03-07T02:09:09.119382720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 02:09:09.119467 containerd[1463]: time="2026-03-07T02:09:09.119454565Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 02:09:09.119587 containerd[1463]: time="2026-03-07T02:09:09.119478790Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 02:09:09.119587 containerd[1463]: time="2026-03-07T02:09:09.119502645Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119587 containerd[1463]: time="2026-03-07T02:09:09.119523564Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119691 containerd[1463]: time="2026-03-07T02:09:09.119542479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119691 containerd[1463]: time="2026-03-07T02:09:09.119616367Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 7 02:09:09.119691 containerd[1463]: time="2026-03-07T02:09:09.119639741Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119691 containerd[1463]: time="2026-03-07T02:09:09.119659668Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119691 containerd[1463]: time="2026-03-07T02:09:09.119678302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119695565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119725381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119745057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119762269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119788529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119808105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.119907 containerd[1463]: time="2026-03-07T02:09:09.119908673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.119931004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.119953316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.119982039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120002267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120020471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120041871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120061738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120085312Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120114898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120135 containerd[1463]: time="2026-03-07T02:09:09.120135325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120154582Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120238088Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120264467Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120282120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120303891Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120320942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120341350Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120356549Z" level=info msg="NRI interface is disabled by configuration." Mar 7 02:09:09.120407 containerd[1463]: time="2026-03-07T02:09:09.120374532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 02:09:09.122109 containerd[1463]: time="2026-03-07T02:09:09.120921444Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 02:09:09.122109 containerd[1463]: time="2026-03-07T02:09:09.121027471Z" level=info msg="Connect containerd service" Mar 7 02:09:09.122109 containerd[1463]: time="2026-03-07T02:09:09.121077255Z" level=info msg="using legacy CRI server" Mar 7 02:09:09.122109 containerd[1463]: time="2026-03-07T02:09:09.121090429Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 02:09:09.122109 containerd[1463]: time="2026-03-07T02:09:09.121211374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 02:09:09.122455 containerd[1463]: time="2026-03-07T02:09:09.122164434Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 02:09:09.122921 containerd[1463]: time="2026-03-07T02:09:09.122621339Z" level=info msg="Start subscribing containerd event" Mar 7 02:09:09.122921 containerd[1463]: time="2026-03-07T02:09:09.122753395Z" level=info msg="Start recovering state" Mar 7 02:09:09.122921 containerd[1463]: time="2026-03-07T02:09:09.122646885Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 7 02:09:09.123027 containerd[1463]: time="2026-03-07T02:09:09.122940204Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 02:09:09.123110 containerd[1463]: time="2026-03-07T02:09:09.123089153Z" level=info msg="Start event monitor" Mar 7 02:09:09.123191 containerd[1463]: time="2026-03-07T02:09:09.123173630Z" level=info msg="Start snapshots syncer" Mar 7 02:09:09.123256 containerd[1463]: time="2026-03-07T02:09:09.123240304Z" level=info msg="Start cni network conf syncer for default" Mar 7 02:09:09.123315 containerd[1463]: time="2026-03-07T02:09:09.123300227Z" level=info msg="Start streaming server" Mar 7 02:09:09.123611 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 02:09:09.125712 containerd[1463]: time="2026-03-07T02:09:09.125641506Z" level=info msg="containerd successfully booted in 0.052252s" Mar 7 02:09:09.305788 tar[1459]: linux-amd64/README.md Mar 7 02:09:09.321998 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 02:09:10.409028 systemd-networkd[1382]: eth0: Gained IPv6LL Mar 7 02:09:10.412939 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 02:09:10.417455 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 02:09:10.434129 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 02:09:10.439973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:09:10.446020 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 02:09:10.468718 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 02:09:10.469158 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 02:09:10.473125 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 7 02:09:10.477539 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 02:09:11.204612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:11.208155 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 02:09:11.209969 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:09:11.211273 systemd[1]: Startup finished in 1.311s (kernel) + 6.264s (initrd) + 5.379s (userspace) = 12.955s. Mar 7 02:09:11.631207 kubelet[1547]: E0307 02:09:11.631027 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:09:11.634924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:09:11.635202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:09:13.261983 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 02:09:13.271251 systemd[1]: Started sshd@0-10.0.0.4:22-10.0.0.1:34798.service - OpenSSH per-connection server daemon (10.0.0.1:34798). Mar 7 02:09:13.319420 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 34798 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:13.321708 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:13.336359 systemd-logind[1453]: New session 1 of user core. Mar 7 02:09:13.338538 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 02:09:13.356469 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 7 02:09:13.372224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 02:09:13.390479 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 02:09:13.394401 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 02:09:13.507964 systemd[1564]: Queued start job for default target default.target. Mar 7 02:09:13.519932 systemd[1564]: Created slice app.slice - User Application Slice. Mar 7 02:09:13.520002 systemd[1564]: Reached target paths.target - Paths. Mar 7 02:09:13.520028 systemd[1564]: Reached target timers.target - Timers. Mar 7 02:09:13.522347 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 02:09:13.537421 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 02:09:13.537625 systemd[1564]: Reached target sockets.target - Sockets. Mar 7 02:09:13.537660 systemd[1564]: Reached target basic.target - Basic System. Mar 7 02:09:13.537702 systemd[1564]: Reached target default.target - Main User Target. Mar 7 02:09:13.537753 systemd[1564]: Startup finished in 135ms. Mar 7 02:09:13.538082 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 02:09:13.540360 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 02:09:13.602915 systemd[1]: Started sshd@1-10.0.0.4:22-10.0.0.1:34806.service - OpenSSH per-connection server daemon (10.0.0.1:34806). Mar 7 02:09:13.649859 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 34806 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:13.652222 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:13.659062 systemd-logind[1453]: New session 2 of user core. Mar 7 02:09:13.669070 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 7 02:09:13.728903 sshd[1575]: pam_unix(sshd:session): session closed for user core Mar 7 02:09:13.736795 systemd[1]: sshd@1-10.0.0.4:22-10.0.0.1:34806.service: Deactivated successfully. Mar 7 02:09:13.739653 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 02:09:13.741512 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Mar 7 02:09:13.758293 systemd[1]: Started sshd@2-10.0.0.4:22-10.0.0.1:34816.service - OpenSSH per-connection server daemon (10.0.0.1:34816). Mar 7 02:09:13.759540 systemd-logind[1453]: Removed session 2. Mar 7 02:09:13.791729 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:13.793798 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:13.799321 systemd-logind[1453]: New session 3 of user core. Mar 7 02:09:13.809065 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 02:09:13.861157 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 7 02:09:13.870513 systemd[1]: sshd@2-10.0.0.4:22-10.0.0.1:34816.service: Deactivated successfully. Mar 7 02:09:13.872982 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 02:09:13.875269 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Mar 7 02:09:13.877048 systemd[1]: Started sshd@3-10.0.0.4:22-10.0.0.1:34826.service - OpenSSH per-connection server daemon (10.0.0.1:34826). Mar 7 02:09:13.878252 systemd-logind[1453]: Removed session 3. Mar 7 02:09:13.919642 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 34826 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:13.921677 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:13.927191 systemd-logind[1453]: New session 4 of user core. Mar 7 02:09:13.940990 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 7 02:09:13.998072 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 7 02:09:14.005416 systemd[1]: sshd@3-10.0.0.4:22-10.0.0.1:34826.service: Deactivated successfully. Mar 7 02:09:14.007483 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 02:09:14.009466 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Mar 7 02:09:14.016179 systemd[1]: Started sshd@4-10.0.0.4:22-10.0.0.1:34832.service - OpenSSH per-connection server daemon (10.0.0.1:34832). Mar 7 02:09:14.017370 systemd-logind[1453]: Removed session 4. Mar 7 02:09:14.054536 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 34832 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:14.056319 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:14.062236 systemd-logind[1453]: New session 5 of user core. Mar 7 02:09:14.077030 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 02:09:14.154687 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 02:09:14.155144 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:09:14.180120 sudo[1599]: pam_unix(sudo:session): session closed for user root Mar 7 02:09:14.182632 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 7 02:09:14.195726 systemd[1]: sshd@4-10.0.0.4:22-10.0.0.1:34832.service: Deactivated successfully. Mar 7 02:09:14.197389 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 02:09:14.199374 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Mar 7 02:09:14.208210 systemd[1]: Started sshd@5-10.0.0.4:22-10.0.0.1:34834.service - OpenSSH per-connection server daemon (10.0.0.1:34834). Mar 7 02:09:14.209234 systemd-logind[1453]: Removed session 5. 
Mar 7 02:09:14.246711 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 34834 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:14.248543 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:14.254354 systemd-logind[1453]: New session 6 of user core. Mar 7 02:09:14.269081 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 02:09:14.327021 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 02:09:14.327394 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:09:14.332669 sudo[1609]: pam_unix(sudo:session): session closed for user root Mar 7 02:09:14.340778 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 02:09:14.341308 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:09:14.363258 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 02:09:14.365529 auditctl[1612]: No rules Mar 7 02:09:14.366999 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 02:09:14.367378 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 02:09:14.369457 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 02:09:14.408196 augenrules[1630]: No rules Mar 7 02:09:14.409983 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 02:09:14.411136 sudo[1608]: pam_unix(sudo:session): session closed for user root Mar 7 02:09:14.413340 sshd[1604]: pam_unix(sshd:session): session closed for user core Mar 7 02:09:14.430653 systemd[1]: sshd@5-10.0.0.4:22-10.0.0.1:34834.service: Deactivated successfully. Mar 7 02:09:14.432425 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 02:09:14.434010 systemd-logind[1453]: Session 6 logged out. 
Waiting for processes to exit. Mar 7 02:09:14.445227 systemd[1]: Started sshd@6-10.0.0.4:22-10.0.0.1:34848.service - OpenSSH per-connection server daemon (10.0.0.1:34848). Mar 7 02:09:14.446381 systemd-logind[1453]: Removed session 6. Mar 7 02:09:14.481078 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 34848 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:09:14.483085 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:09:14.488145 systemd-logind[1453]: New session 7 of user core. Mar 7 02:09:14.495028 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 02:09:14.550224 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 02:09:14.550620 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:09:14.845113 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 02:09:14.845309 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 02:09:15.127300 dockerd[1660]: time="2026-03-07T02:09:15.127126751Z" level=info msg="Starting up" Mar 7 02:09:15.377175 dockerd[1660]: time="2026-03-07T02:09:15.377054277Z" level=info msg="Loading containers: start." Mar 7 02:09:15.535867 kernel: Initializing XFRM netlink socket Mar 7 02:09:15.638866 systemd-networkd[1382]: docker0: Link UP Mar 7 02:09:15.675361 dockerd[1660]: time="2026-03-07T02:09:15.675291467Z" level=info msg="Loading containers: done." Mar 7 02:09:15.691233 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1370351048-merged.mount: Deactivated successfully. 
Mar 7 02:09:15.694097 dockerd[1660]: time="2026-03-07T02:09:15.694033543Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 02:09:15.694239 dockerd[1660]: time="2026-03-07T02:09:15.694195155Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 02:09:15.694351 dockerd[1660]: time="2026-03-07T02:09:15.694315731Z" level=info msg="Daemon has completed initialization" Mar 7 02:09:15.734555 dockerd[1660]: time="2026-03-07T02:09:15.734467244Z" level=info msg="API listen on /run/docker.sock" Mar 7 02:09:15.734713 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 02:09:16.194762 containerd[1463]: time="2026-03-07T02:09:16.194700057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 02:09:16.716984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816972953.mount: Deactivated successfully. 
Mar 7 02:09:18.016751 containerd[1463]: time="2026-03-07T02:09:18.016645405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:18.017932 containerd[1463]: time="2026-03-07T02:09:18.017806849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 7 02:09:18.028467 containerd[1463]: time="2026-03-07T02:09:18.028338580Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:18.032114 containerd[1463]: time="2026-03-07T02:09:18.032008703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:18.033975 containerd[1463]: time="2026-03-07T02:09:18.033872748Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.839045503s" Mar 7 02:09:18.033975 containerd[1463]: time="2026-03-07T02:09:18.033946876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 02:09:18.034936 containerd[1463]: time="2026-03-07T02:09:18.034888143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 02:09:19.291315 containerd[1463]: time="2026-03-07T02:09:19.291192446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:19.292390 containerd[1463]: time="2026-03-07T02:09:19.292341943Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 7 02:09:19.293738 containerd[1463]: time="2026-03-07T02:09:19.293662067Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:19.297793 containerd[1463]: time="2026-03-07T02:09:19.297712595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:19.299112 containerd[1463]: time="2026-03-07T02:09:19.298960488Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.264032712s" Mar 7 02:09:19.299112 containerd[1463]: time="2026-03-07T02:09:19.299027905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 02:09:19.300252 containerd[1463]: time="2026-03-07T02:09:19.299722754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 02:09:20.214615 containerd[1463]: time="2026-03-07T02:09:20.214493112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:20.215564 containerd[1463]: time="2026-03-07T02:09:20.215491256Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 7 02:09:20.217088 containerd[1463]: time="2026-03-07T02:09:20.217035148Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:20.220218 containerd[1463]: time="2026-03-07T02:09:20.220153860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:20.221392 containerd[1463]: time="2026-03-07T02:09:20.221320516Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 921.557166ms" Mar 7 02:09:20.221392 containerd[1463]: time="2026-03-07T02:09:20.221382441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 02:09:20.221980 containerd[1463]: time="2026-03-07T02:09:20.221903935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 02:09:21.197760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034693362.mount: Deactivated successfully. 
Mar 7 02:09:21.539004 containerd[1463]: time="2026-03-07T02:09:21.538882001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:21.539962 containerd[1463]: time="2026-03-07T02:09:21.539919048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 7 02:09:21.541033 containerd[1463]: time="2026-03-07T02:09:21.540998272Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:21.544066 containerd[1463]: time="2026-03-07T02:09:21.544010565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:21.544892 containerd[1463]: time="2026-03-07T02:09:21.544767148Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.322775939s" Mar 7 02:09:21.544892 containerd[1463]: time="2026-03-07T02:09:21.544861143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 02:09:21.545614 containerd[1463]: time="2026-03-07T02:09:21.545556571Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 02:09:21.885389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 02:09:21.892132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 02:09:22.057174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:22.062079 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:09:22.090523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617516205.mount: Deactivated successfully. Mar 7 02:09:22.124424 kubelet[1892]: E0307 02:09:22.124288 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:09:22.129707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:09:22.129968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:09:22.914678 containerd[1463]: time="2026-03-07T02:09:22.914555590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:22.915719 containerd[1463]: time="2026-03-07T02:09:22.915642041Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 7 02:09:22.917060 containerd[1463]: time="2026-03-07T02:09:22.917021404Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:22.921090 containerd[1463]: time="2026-03-07T02:09:22.921016007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:22.922771 containerd[1463]: time="2026-03-07T02:09:22.922725526Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.377135492s" Mar 7 02:09:22.922861 containerd[1463]: time="2026-03-07T02:09:22.922772193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 02:09:22.924115 containerd[1463]: time="2026-03-07T02:09:22.924065274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 02:09:23.298766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700383381.mount: Deactivated successfully. Mar 7 02:09:23.309260 containerd[1463]: time="2026-03-07T02:09:23.309166870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:23.312693 containerd[1463]: time="2026-03-07T02:09:23.312577648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 02:09:23.314138 containerd[1463]: time="2026-03-07T02:09:23.314034576Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:23.317253 containerd[1463]: time="2026-03-07T02:09:23.317177666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:23.317902 containerd[1463]: time="2026-03-07T02:09:23.317807124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id 
\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 393.702256ms" Mar 7 02:09:23.317902 containerd[1463]: time="2026-03-07T02:09:23.317893516Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 02:09:23.318490 containerd[1463]: time="2026-03-07T02:09:23.318441879Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 02:09:23.774523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302566463.mount: Deactivated successfully. Mar 7 02:09:24.682341 containerd[1463]: time="2026-03-07T02:09:24.682246721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:24.683982 containerd[1463]: time="2026-03-07T02:09:24.683253800Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 7 02:09:24.686313 containerd[1463]: time="2026-03-07T02:09:24.686092065Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:24.690174 containerd[1463]: time="2026-03-07T02:09:24.690138139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:24.692446 containerd[1463]: time="2026-03-07T02:09:24.692371247Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest 
\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.373883913s" Mar 7 02:09:24.692446 containerd[1463]: time="2026-03-07T02:09:24.692416962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 02:09:27.475281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:27.486246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:09:27.523088 systemd[1]: Reloading requested from client PID 2047 ('systemctl') (unit session-7.scope)... Mar 7 02:09:27.523131 systemd[1]: Reloading... Mar 7 02:09:27.618914 zram_generator::config[2086]: No configuration found. Mar 7 02:09:27.745478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:09:27.817419 systemd[1]: Reloading finished in 293 ms. Mar 7 02:09:27.880410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:27.884808 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:09:27.887562 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:09:27.887998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:27.897397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:09:28.073368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:28.080981 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:09:28.137427 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 7 02:09:28.137427 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:09:28.137950 kubelet[2136]: I0307 02:09:28.137459 2136 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 02:09:28.448219 kubelet[2136]: I0307 02:09:28.448089 2136 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 02:09:28.448219 kubelet[2136]: I0307 02:09:28.448129 2136 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:09:28.448219 kubelet[2136]: I0307 02:09:28.448162 2136 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:09:28.448219 kubelet[2136]: I0307 02:09:28.448173 2136 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 02:09:28.448364 kubelet[2136]: I0307 02:09:28.448341 2136 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 02:09:28.483788 kubelet[2136]: E0307 02:09:28.483702 2136 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 02:09:28.484174 kubelet[2136]: I0307 02:09:28.484126 2136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 02:09:28.489645 kubelet[2136]: E0307 02:09:28.489541 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 02:09:28.489739 kubelet[2136]: I0307 02:09:28.489677 2136 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 02:09:28.496662 kubelet[2136]: I0307 02:09:28.496572 2136 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 02:09:28.498442 kubelet[2136]: I0307 02:09:28.498321 2136 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 02:09:28.498794 kubelet[2136]: I0307 02:09:28.498402 2136 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 02:09:28.498794 kubelet[2136]: I0307 02:09:28.498753 2136 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 02:09:28.498794 kubelet[2136]: I0307 02:09:28.498763 2136 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 02:09:28.499043 kubelet[2136]: I0307 02:09:28.498908 2136 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 02:09:28.501306 kubelet[2136]: I0307 02:09:28.501284 2136 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 02:09:28.501538 kubelet[2136]: I0307 02:09:28.501485 2136 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 02:09:28.501538 kubelet[2136]: I0307 02:09:28.501512 2136 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 02:09:28.501538 kubelet[2136]: I0307 02:09:28.501531 2136 kubelet.go:387] "Adding apiserver pod source"
Mar 7 02:09:28.501671 kubelet[2136]: I0307 02:09:28.501544 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 02:09:28.502322 kubelet[2136]: E0307 02:09:28.502285 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 02:09:28.502378 kubelet[2136]: E0307 02:09:28.502339 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 02:09:28.503501 kubelet[2136]: I0307 02:09:28.503473 2136 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 02:09:28.503976 kubelet[2136]: I0307 02:09:28.503950 2136 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 02:09:28.503976 kubelet[2136]: I0307 02:09:28.503985 2136 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 02:09:28.504071 kubelet[2136]: W0307 02:09:28.504031 2136 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 02:09:28.507386 kubelet[2136]: I0307 02:09:28.507338 2136 server.go:1262] "Started kubelet"
Mar 7 02:09:28.507576 kubelet[2136]: I0307 02:09:28.507456 2136 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 02:09:28.507576 kubelet[2136]: I0307 02:09:28.507553 2136 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 02:09:28.508226 kubelet[2136]: I0307 02:09:28.508157 2136 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 02:09:28.508284 kubelet[2136]: I0307 02:09:28.508236 2136 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 02:09:28.509118 kubelet[2136]: I0307 02:09:28.508750 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 02:09:28.510268 kubelet[2136]: I0307 02:09:28.510158 2136 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 02:09:28.511648 kubelet[2136]: I0307 02:09:28.510783 2136 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 02:09:28.513184 kubelet[2136]: E0307 02:09:28.511471 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6d1b922781c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 02:09:28.507318724 +0000 UTC m=+0.420847778,LastTimestamp:2026-03-07 02:09:28.507318724 +0000 UTC m=+0.420847778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 02:09:28.513949 kubelet[2136]: E0307 02:09:28.513890 2136 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 02:09:28.514025 kubelet[2136]: I0307 02:09:28.513958 2136 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 02:09:28.514174 kubelet[2136]: I0307 02:09:28.514081 2136 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 02:09:28.514174 kubelet[2136]: I0307 02:09:28.514133 2136 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 02:09:28.514250 kubelet[2136]: E0307 02:09:28.514221 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="200ms"
Mar 7 02:09:28.514477 kubelet[2136]: E0307 02:09:28.514389 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 02:09:28.515327 kubelet[2136]: I0307 02:09:28.515287 2136 factory.go:223] Registration of the systemd container factory successfully
Mar 7 02:09:28.515398 kubelet[2136]: I0307 02:09:28.515382 2136 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 02:09:28.515991 kubelet[2136]: E0307 02:09:28.515952 2136 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 02:09:28.516471 kubelet[2136]: I0307 02:09:28.516444 2136 factory.go:223] Registration of the containerd container factory successfully
Mar 7 02:09:28.534020 kubelet[2136]: I0307 02:09:28.533592 2136 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 02:09:28.534020 kubelet[2136]: I0307 02:09:28.533645 2136 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 02:09:28.534020 kubelet[2136]: I0307 02:09:28.533662 2136 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 02:09:28.536562 kubelet[2136]: I0307 02:09:28.536512 2136 policy_none.go:49] "None policy: Start"
Mar 7 02:09:28.536562 kubelet[2136]: I0307 02:09:28.536551 2136 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 02:09:28.536562 kubelet[2136]: I0307 02:09:28.536563 2136 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 02:09:28.537974 kubelet[2136]: I0307 02:09:28.537941 2136 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 02:09:28.539521 kubelet[2136]: I0307 02:09:28.538919 2136 policy_none.go:47] "Start"
Mar 7 02:09:28.539883 kubelet[2136]: I0307 02:09:28.539790 2136 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 02:09:28.539925 kubelet[2136]: I0307 02:09:28.539890 2136 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 02:09:28.539925 kubelet[2136]: I0307 02:09:28.539911 2136 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 02:09:28.539972 kubelet[2136]: E0307 02:09:28.539948 2136 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 02:09:28.540531 kubelet[2136]: E0307 02:09:28.540477 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 02:09:28.546148 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 02:09:28.561003 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 02:09:28.564585 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 02:09:28.571890 kubelet[2136]: E0307 02:09:28.571809 2136 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 02:09:28.572285 kubelet[2136]: I0307 02:09:28.572030 2136 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 02:09:28.572285 kubelet[2136]: I0307 02:09:28.572075 2136 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 02:09:28.572285 kubelet[2136]: I0307 02:09:28.572257 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 02:09:28.573374 kubelet[2136]: E0307 02:09:28.573336 2136 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 02:09:28.573374 kubelet[2136]: E0307 02:09:28.573362 2136 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 02:09:28.653083 systemd[1]: Created slice kubepods-burstable-podabd37d360f2aaf839ebfe43151f3c676.slice - libcontainer container kubepods-burstable-podabd37d360f2aaf839ebfe43151f3c676.slice.
Mar 7 02:09:28.673440 kubelet[2136]: E0307 02:09:28.673311 2136 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 02:09:28.674768 kubelet[2136]: I0307 02:09:28.674528 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 02:09:28.675169 kubelet[2136]: E0307 02:09:28.675088 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost"
Mar 7 02:09:28.677687 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 7 02:09:28.690988 kubelet[2136]: E0307 02:09:28.690927 2136 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 02:09:28.693982 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 7 02:09:28.696029 kubelet[2136]: E0307 02:09:28.695980 2136 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 02:09:28.715596 kubelet[2136]: E0307 02:09:28.715468 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="400ms"
Mar 7 02:09:28.816313 kubelet[2136]: I0307 02:09:28.816240 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 02:09:28.816313 kubelet[2136]: I0307 02:09:28.816287 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 02:09:28.816463 kubelet[2136]: I0307 02:09:28.816331 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 02:09:28.816463 kubelet[2136]: I0307 02:09:28.816351 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 02:09:28.816463 kubelet[2136]: I0307 02:09:28.816366 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 7 02:09:28.816463 kubelet[2136]: I0307 02:09:28.816378 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abd37d360f2aaf839ebfe43151f3c676-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abd37d360f2aaf839ebfe43151f3c676\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 02:09:28.816463 kubelet[2136]: I0307 02:09:28.816417 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abd37d360f2aaf839ebfe43151f3c676-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abd37d360f2aaf839ebfe43151f3c676\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 02:09:28.816560 kubelet[2136]: I0307 02:09:28.816431 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abd37d360f2aaf839ebfe43151f3c676-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abd37d360f2aaf839ebfe43151f3c676\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 02:09:28.816560 kubelet[2136]: I0307 02:09:28.816454 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 02:09:28.876865 kubelet[2136]: I0307 02:09:28.876752 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 02:09:28.877159 kubelet[2136]: E0307 02:09:28.877124 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost"
Mar 7 02:09:28.977935 kubelet[2136]: E0307 02:09:28.977762 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:28.979240 containerd[1463]: time="2026-03-07T02:09:28.979154115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abd37d360f2aaf839ebfe43151f3c676,Namespace:kube-system,Attempt:0,}"
Mar 7 02:09:28.994678 kubelet[2136]: E0307 02:09:28.994593 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:28.995414 containerd[1463]: time="2026-03-07T02:09:28.995179251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 7 02:09:28.999278 kubelet[2136]: E0307 02:09:28.999229 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:28.999722 containerd[1463]: time="2026-03-07T02:09:28.999684332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 7 02:09:29.116517 kubelet[2136]: E0307 02:09:29.116465 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="800ms"
Mar 7 02:09:29.279369 kubelet[2136]: I0307 02:09:29.279171 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 02:09:29.279943 kubelet[2136]: E0307 02:09:29.279666 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost"
Mar 7 02:09:29.316792 kubelet[2136]: E0307 02:09:29.316710 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 02:09:29.382780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882801909.mount: Deactivated successfully.
Mar 7 02:09:29.391511 containerd[1463]: time="2026-03-07T02:09:29.391429317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 02:09:29.394553 containerd[1463]: time="2026-03-07T02:09:29.394493947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 7 02:09:29.396111 containerd[1463]: time="2026-03-07T02:09:29.396062209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 02:09:29.397301 containerd[1463]: time="2026-03-07T02:09:29.397266493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 02:09:29.398194 containerd[1463]: time="2026-03-07T02:09:29.398137063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 02:09:29.399231 containerd[1463]: time="2026-03-07T02:09:29.399188771Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 02:09:29.400284 containerd[1463]: time="2026-03-07T02:09:29.400252193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 02:09:29.402087 containerd[1463]: time="2026-03-07T02:09:29.402027766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 02:09:29.403638 containerd[1463]: time="2026-03-07T02:09:29.403585400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 408.346927ms"
Mar 7 02:09:29.405360 containerd[1463]: time="2026-03-07T02:09:29.405292579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 426.053806ms"
Mar 7 02:09:29.412877 containerd[1463]: time="2026-03-07T02:09:29.412755500Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 413.007048ms"
Mar 7 02:09:29.461630 kubelet[2136]: E0307 02:09:29.461541 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 02:09:29.519246 containerd[1463]: time="2026-03-07T02:09:29.519033499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:09:29.519246 containerd[1463]: time="2026-03-07T02:09:29.519109390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:09:29.519246 containerd[1463]: time="2026-03-07T02:09:29.519128196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:29.520078 containerd[1463]: time="2026-03-07T02:09:29.519891912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:29.524046 containerd[1463]: time="2026-03-07T02:09:29.523877933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:09:29.524172 containerd[1463]: time="2026-03-07T02:09:29.524015170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:09:29.525168 containerd[1463]: time="2026-03-07T02:09:29.524165340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:29.526713 containerd[1463]: time="2026-03-07T02:09:29.525139208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:29.527799 containerd[1463]: time="2026-03-07T02:09:29.527665590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:09:29.527921 containerd[1463]: time="2026-03-07T02:09:29.527749426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:09:29.527921 containerd[1463]: time="2026-03-07T02:09:29.527769233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:29.528896 containerd[1463]: time="2026-03-07T02:09:29.527916819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:29.549105 systemd[1]: Started cri-containerd-8bad4e44036aac8c1dd217f941611785c93208b75e33a116bc914ff385c13028.scope - libcontainer container 8bad4e44036aac8c1dd217f941611785c93208b75e33a116bc914ff385c13028.
Mar 7 02:09:29.555662 systemd[1]: Started cri-containerd-a6430a3d9360c417181c2bd5fe7efc8162887d825a1cfa257ad6f03429580094.scope - libcontainer container a6430a3d9360c417181c2bd5fe7efc8162887d825a1cfa257ad6f03429580094.
Mar 7 02:09:29.558320 systemd[1]: Started cri-containerd-eb89a9fca2003373950274e21b85565d32950f3b4b9081b3d903c6ccbfda328a.scope - libcontainer container eb89a9fca2003373950274e21b85565d32950f3b4b9081b3d903c6ccbfda328a.
Mar 7 02:09:29.602994 containerd[1463]: time="2026-03-07T02:09:29.602879547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bad4e44036aac8c1dd217f941611785c93208b75e33a116bc914ff385c13028\""
Mar 7 02:09:29.604190 kubelet[2136]: E0307 02:09:29.604133 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:29.616043 containerd[1463]: time="2026-03-07T02:09:29.615963241Z" level=info msg="CreateContainer within sandbox \"8bad4e44036aac8c1dd217f941611785c93208b75e33a116bc914ff385c13028\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 02:09:29.619108 containerd[1463]: time="2026-03-07T02:09:29.619021070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb89a9fca2003373950274e21b85565d32950f3b4b9081b3d903c6ccbfda328a\""
Mar 7 02:09:29.621432 kubelet[2136]: E0307 02:09:29.621363 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:29.628084 containerd[1463]: time="2026-03-07T02:09:29.627992115Z" level=info msg="CreateContainer within sandbox \"eb89a9fca2003373950274e21b85565d32950f3b4b9081b3d903c6ccbfda328a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 02:09:29.628712 containerd[1463]: time="2026-03-07T02:09:29.628591389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abd37d360f2aaf839ebfe43151f3c676,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6430a3d9360c417181c2bd5fe7efc8162887d825a1cfa257ad6f03429580094\""
Mar 7 02:09:29.629684 kubelet[2136]: E0307 02:09:29.629537 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:29.635115 containerd[1463]: time="2026-03-07T02:09:29.634969727Z" level=info msg="CreateContainer within sandbox \"a6430a3d9360c417181c2bd5fe7efc8162887d825a1cfa257ad6f03429580094\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 02:09:29.639977 containerd[1463]: time="2026-03-07T02:09:29.639915476Z" level=info msg="CreateContainer within sandbox \"8bad4e44036aac8c1dd217f941611785c93208b75e33a116bc914ff385c13028\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f57818ffb484fd53cd73f9c2fb804a1742e43b07963e41c61af0e25366f7fef\""
Mar 7 02:09:29.641274 containerd[1463]: time="2026-03-07T02:09:29.641214315Z" level=info msg="StartContainer for \"3f57818ffb484fd53cd73f9c2fb804a1742e43b07963e41c61af0e25366f7fef\""
Mar 7 02:09:29.650248 containerd[1463]: time="2026-03-07T02:09:29.650214546Z" level=info msg="CreateContainer within sandbox \"eb89a9fca2003373950274e21b85565d32950f3b4b9081b3d903c6ccbfda328a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92cfdd9fc2e5049b0e8354471c78a6465c65dd22cca0ad6dd37ee52171eb7a25\""
Mar 7 02:09:29.652476 containerd[1463]: time="2026-03-07T02:09:29.651371086Z" level=info msg="StartContainer for \"92cfdd9fc2e5049b0e8354471c78a6465c65dd22cca0ad6dd37ee52171eb7a25\""
Mar 7 02:09:29.661193 containerd[1463]: time="2026-03-07T02:09:29.661090999Z" level=info msg="CreateContainer within sandbox \"a6430a3d9360c417181c2bd5fe7efc8162887d825a1cfa257ad6f03429580094\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4466c959f215316cf228b41ddc15e30b72f8f185b2f863c6f656c6e24ee026a\""
Mar 7 02:09:29.662161 containerd[1463]: time="2026-03-07T02:09:29.662131947Z" level=info msg="StartContainer for \"b4466c959f215316cf228b41ddc15e30b72f8f185b2f863c6f656c6e24ee026a\""
Mar 7 02:09:29.676026 systemd[1]: Started cri-containerd-3f57818ffb484fd53cd73f9c2fb804a1742e43b07963e41c61af0e25366f7fef.scope - libcontainer container 3f57818ffb484fd53cd73f9c2fb804a1742e43b07963e41c61af0e25366f7fef.
Mar 7 02:09:29.698062 systemd[1]: Started cri-containerd-92cfdd9fc2e5049b0e8354471c78a6465c65dd22cca0ad6dd37ee52171eb7a25.scope - libcontainer container 92cfdd9fc2e5049b0e8354471c78a6465c65dd22cca0ad6dd37ee52171eb7a25.
Mar 7 02:09:29.702255 systemd[1]: Started cri-containerd-b4466c959f215316cf228b41ddc15e30b72f8f185b2f863c6f656c6e24ee026a.scope - libcontainer container b4466c959f215316cf228b41ddc15e30b72f8f185b2f863c6f656c6e24ee026a.
Mar 7 02:09:29.753380 containerd[1463]: time="2026-03-07T02:09:29.753270199Z" level=info msg="StartContainer for \"92cfdd9fc2e5049b0e8354471c78a6465c65dd22cca0ad6dd37ee52171eb7a25\" returns successfully" Mar 7 02:09:29.753630 containerd[1463]: time="2026-03-07T02:09:29.753538395Z" level=info msg="StartContainer for \"3f57818ffb484fd53cd73f9c2fb804a1742e43b07963e41c61af0e25366f7fef\" returns successfully" Mar 7 02:09:29.764110 containerd[1463]: time="2026-03-07T02:09:29.764028667Z" level=info msg="StartContainer for \"b4466c959f215316cf228b41ddc15e30b72f8f185b2f863c6f656c6e24ee026a\" returns successfully" Mar 7 02:09:30.085089 kubelet[2136]: I0307 02:09:30.085018 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 02:09:30.552505 kubelet[2136]: E0307 02:09:30.551936 2136 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:09:30.552505 kubelet[2136]: E0307 02:09:30.552079 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:30.558536 kubelet[2136]: E0307 02:09:30.558341 2136 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:09:30.559056 kubelet[2136]: E0307 02:09:30.558713 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:30.563670 kubelet[2136]: E0307 02:09:30.563654 2136 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:09:30.563803 kubelet[2136]: E0307 02:09:30.563790 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:30.912438 kubelet[2136]: E0307 02:09:30.911878 2136 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 02:09:30.997871 kubelet[2136]: I0307 02:09:30.995893 2136 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 02:09:31.015002 kubelet[2136]: I0307 02:09:31.014934 2136 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:31.029963 kubelet[2136]: E0307 02:09:31.029894 2136 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:31.030083 kubelet[2136]: I0307 02:09:31.029994 2136 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:31.032210 kubelet[2136]: E0307 02:09:31.032142 2136 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:31.032210 kubelet[2136]: I0307 02:09:31.032201 2136 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:09:31.034196 kubelet[2136]: E0307 02:09:31.034176 2136 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 02:09:31.502735 kubelet[2136]: I0307 02:09:31.502634 2136 apiserver.go:52] "Watching apiserver" Mar 7 02:09:31.514599 kubelet[2136]: I0307 02:09:31.514488 2136 desired_state_of_world_populator.go:154] "Finished populating initial 
desired state of world" Mar 7 02:09:31.573052 kubelet[2136]: I0307 02:09:31.572976 2136 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:31.573052 kubelet[2136]: I0307 02:09:31.573047 2136 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:09:31.575015 kubelet[2136]: E0307 02:09:31.574954 2136 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 02:09:31.575225 kubelet[2136]: E0307 02:09:31.575124 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:31.575225 kubelet[2136]: E0307 02:09:31.575165 2136 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:31.575345 kubelet[2136]: E0307 02:09:31.575292 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:33.031447 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-7.scope)... Mar 7 02:09:33.031488 systemd[1]: Reloading... Mar 7 02:09:33.125985 zram_generator::config[2469]: No configuration found. Mar 7 02:09:33.242053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:09:33.336515 systemd[1]: Reloading finished in 304 ms. Mar 7 02:09:33.397755 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 02:09:33.427730 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:09:33.428163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:33.428263 systemd[1]: kubelet.service: Consumed 1.012s CPU time, 128.7M memory peak, 0B memory swap peak. Mar 7 02:09:33.436600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:09:33.612443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:09:33.619150 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:09:33.675850 kubelet[2513]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 02:09:33.676189 kubelet[2513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:09:33.676189 kubelet[2513]: I0307 02:09:33.676025 2513 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 02:09:33.682374 kubelet[2513]: I0307 02:09:33.682328 2513 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 02:09:33.682374 kubelet[2513]: I0307 02:09:33.682361 2513 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:09:33.682460 kubelet[2513]: I0307 02:09:33.682385 2513 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:09:33.682460 kubelet[2513]: I0307 02:09:33.682396 2513 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 02:09:33.682549 kubelet[2513]: I0307 02:09:33.682530 2513 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 02:09:33.683593 kubelet[2513]: I0307 02:09:33.683559 2513 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 02:09:33.685572 kubelet[2513]: I0307 02:09:33.685509 2513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:09:33.688364 kubelet[2513]: E0307 02:09:33.688320 2513 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:09:33.688364 kubelet[2513]: I0307 02:09:33.688354 2513 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:09:33.694454 kubelet[2513]: I0307 02:09:33.694401 2513 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:09:33.694707 kubelet[2513]: I0307 02:09:33.694610 2513 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:09:33.694858 kubelet[2513]: I0307 02:09:33.694681 2513 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:09:33.694858 kubelet[2513]: I0307 02:09:33.694798 2513 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 02:09:33.694858 
kubelet[2513]: I0307 02:09:33.694807 2513 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 02:09:33.694992 kubelet[2513]: I0307 02:09:33.694917 2513 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:09:33.695169 kubelet[2513]: I0307 02:09:33.695120 2513 state_mem.go:36] "Initialized new in-memory state store" Mar 7 02:09:33.695554 kubelet[2513]: I0307 02:09:33.695395 2513 kubelet.go:475] "Attempting to sync node with API server" Mar 7 02:09:33.695554 kubelet[2513]: I0307 02:09:33.695427 2513 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:09:33.695554 kubelet[2513]: I0307 02:09:33.695459 2513 kubelet.go:387] "Adding apiserver pod source" Mar 7 02:09:33.695554 kubelet[2513]: I0307 02:09:33.695475 2513 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:09:33.697007 kubelet[2513]: I0307 02:09:33.696975 2513 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:09:33.697531 kubelet[2513]: I0307 02:09:33.697480 2513 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:09:33.697614 kubelet[2513]: I0307 02:09:33.697543 2513 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:09:33.702689 kubelet[2513]: I0307 02:09:33.702586 2513 server.go:1262] "Started kubelet" Mar 7 02:09:33.704166 kubelet[2513]: I0307 02:09:33.704120 2513 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:09:33.704207 kubelet[2513]: I0307 02:09:33.704183 2513 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:09:33.704462 kubelet[2513]: I0307 02:09:33.704428 2513 server.go:249] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:09:33.705233 kubelet[2513]: I0307 02:09:33.704608 2513 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:09:33.705233 kubelet[2513]: I0307 02:09:33.705113 2513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 02:09:33.706592 kubelet[2513]: I0307 02:09:33.706521 2513 server.go:310] "Adding debug handlers to kubelet server" Mar 7 02:09:33.708672 kubelet[2513]: I0307 02:09:33.708600 2513 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 02:09:33.708746 kubelet[2513]: I0307 02:09:33.708700 2513 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:09:33.708904 kubelet[2513]: I0307 02:09:33.708860 2513 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:09:33.712934 kubelet[2513]: I0307 02:09:33.712425 2513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:09:33.713002 kubelet[2513]: I0307 02:09:33.712964 2513 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:09:33.713062 kubelet[2513]: I0307 02:09:33.713042 2513 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:09:33.713118 kubelet[2513]: E0307 02:09:33.713089 2513 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:09:33.715679 kubelet[2513]: I0307 02:09:33.715595 2513 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:09:33.734043 kubelet[2513]: I0307 02:09:33.734013 2513 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 02:09:33.736860 kubelet[2513]: I0307 02:09:33.735923 2513 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 02:09:33.736860 kubelet[2513]: I0307 02:09:33.735985 2513 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 02:09:33.736860 kubelet[2513]: I0307 02:09:33.736007 2513 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 02:09:33.736860 kubelet[2513]: E0307 02:09:33.736046 2513 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:09:33.763103 kubelet[2513]: I0307 02:09:33.763035 2513 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 02:09:33.763103 kubelet[2513]: I0307 02:09:33.763071 2513 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 02:09:33.763103 kubelet[2513]: I0307 02:09:33.763088 2513 state_mem.go:36] "Initialized new in-memory state store" Mar 7 02:09:33.763289 kubelet[2513]: I0307 02:09:33.763202 2513 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 02:09:33.763289 kubelet[2513]: I0307 02:09:33.763210 2513 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 02:09:33.763289 kubelet[2513]: I0307 02:09:33.763225 2513 policy_none.go:49] "None policy: Start" Mar 7 02:09:33.763289 kubelet[2513]: I0307 02:09:33.763234 2513 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:09:33.763289 kubelet[2513]: I0307 02:09:33.763243 2513 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:09:33.763388 kubelet[2513]: I0307 02:09:33.763338 2513 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 02:09:33.763388 kubelet[2513]: I0307 02:09:33.763347 2513 policy_none.go:47] "Start" Mar 7 02:09:33.769872 kubelet[2513]: E0307 02:09:33.769790 2513 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:09:33.770917 kubelet[2513]: I0307 02:09:33.770269 2513 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 02:09:33.770917 kubelet[2513]: I0307 02:09:33.770303 2513 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:09:33.771243 kubelet[2513]: I0307 02:09:33.771225 2513 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 02:09:33.773900 kubelet[2513]: E0307 02:09:33.773413 2513 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 02:09:33.838025 kubelet[2513]: I0307 02:09:33.837935 2513 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:33.838448 kubelet[2513]: I0307 02:09:33.838318 2513 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:09:33.838448 kubelet[2513]: I0307 02:09:33.838011 2513 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:33.879413 kubelet[2513]: I0307 02:09:33.879237 2513 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 02:09:33.888501 kubelet[2513]: I0307 02:09:33.888391 2513 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 02:09:33.888501 kubelet[2513]: I0307 02:09:33.888480 2513 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 02:09:34.010858 kubelet[2513]: I0307 02:09:34.010578 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:34.010858 kubelet[2513]: I0307 02:09:34.010680 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:34.010858 kubelet[2513]: I0307 02:09:34.010706 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:34.010858 kubelet[2513]: I0307 02:09:34.010732 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:34.010858 kubelet[2513]: I0307 02:09:34.010755 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:09:34.011083 kubelet[2513]: I0307 02:09:34.010781 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abd37d360f2aaf839ebfe43151f3c676-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"abd37d360f2aaf839ebfe43151f3c676\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:34.011083 kubelet[2513]: I0307 02:09:34.010804 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abd37d360f2aaf839ebfe43151f3c676-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abd37d360f2aaf839ebfe43151f3c676\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:34.011083 kubelet[2513]: I0307 02:09:34.010887 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abd37d360f2aaf839ebfe43151f3c676-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abd37d360f2aaf839ebfe43151f3c676\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:34.011083 kubelet[2513]: I0307 02:09:34.010911 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:09:34.147170 kubelet[2513]: E0307 02:09:34.146943 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:34.150529 kubelet[2513]: E0307 02:09:34.150409 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:34.150700 kubelet[2513]: E0307 02:09:34.150563 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
02:09:34.696870 kubelet[2513]: I0307 02:09:34.696731 2513 apiserver.go:52] "Watching apiserver" Mar 7 02:09:34.710183 kubelet[2513]: I0307 02:09:34.710133 2513 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:09:34.753468 kubelet[2513]: E0307 02:09:34.753393 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:34.754532 kubelet[2513]: E0307 02:09:34.754425 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:34.755132 kubelet[2513]: I0307 02:09:34.754946 2513 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:34.766898 kubelet[2513]: E0307 02:09:34.766762 2513 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 02:09:34.767324 kubelet[2513]: E0307 02:09:34.767060 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:34.798857 kubelet[2513]: I0307 02:09:34.797210 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.797186635 podStartE2EDuration="1.797186635s" podCreationTimestamp="2026-03-07 02:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:09:34.781466163 +0000 UTC m=+1.156981289" watchObservedRunningTime="2026-03-07 02:09:34.797186635 +0000 UTC m=+1.172701760" Mar 7 02:09:34.810244 kubelet[2513]: I0307 02:09:34.810147 2513 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.81013323 podStartE2EDuration="1.81013323s" podCreationTimestamp="2026-03-07 02:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:09:34.80996187 +0000 UTC m=+1.185476996" watchObservedRunningTime="2026-03-07 02:09:34.81013323 +0000 UTC m=+1.185648366" Mar 7 02:09:34.810417 kubelet[2513]: I0307 02:09:34.810283 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8102716079999999 podStartE2EDuration="1.810271608s" podCreationTimestamp="2026-03-07 02:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:09:34.797459814 +0000 UTC m=+1.172974940" watchObservedRunningTime="2026-03-07 02:09:34.810271608 +0000 UTC m=+1.185786734" Mar 7 02:09:35.754790 kubelet[2513]: E0307 02:09:35.754467 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:35.754790 kubelet[2513]: E0307 02:09:35.754715 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:38.251671 kubelet[2513]: E0307 02:09:38.251531 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:38.537886 kubelet[2513]: I0307 02:09:38.537669 2513 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 02:09:38.538403 containerd[1463]: time="2026-03-07T02:09:38.538260239Z" level=info msg="No 
cni config template is specified, wait for other system components to drop the config." Mar 7 02:09:38.539009 kubelet[2513]: I0307 02:09:38.538445 2513 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 02:09:38.811930 kubelet[2513]: E0307 02:09:38.811286 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:39.578473 systemd[1]: Created slice kubepods-besteffort-pod7f89c972_7965_4404_a061_e7bd576a5440.slice - libcontainer container kubepods-besteffort-pod7f89c972_7965_4404_a061_e7bd576a5440.slice. Mar 7 02:09:39.748785 systemd[1]: Created slice kubepods-besteffort-podd99811ee_54a5_4282_9ac3_568ca40c2516.slice - libcontainer container kubepods-besteffort-podd99811ee_54a5_4282_9ac3_568ca40c2516.slice. Mar 7 02:09:39.752024 kubelet[2513]: I0307 02:09:39.751945 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f89c972-7965-4404-a061-e7bd576a5440-lib-modules\") pod \"kube-proxy-wzdnb\" (UID: \"7f89c972-7965-4404-a061-e7bd576a5440\") " pod="kube-system/kube-proxy-wzdnb" Mar 7 02:09:39.752024 kubelet[2513]: I0307 02:09:39.751994 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lbn\" (UniqueName: \"kubernetes.io/projected/7f89c972-7965-4404-a061-e7bd576a5440-kube-api-access-98lbn\") pod \"kube-proxy-wzdnb\" (UID: \"7f89c972-7965-4404-a061-e7bd576a5440\") " pod="kube-system/kube-proxy-wzdnb" Mar 7 02:09:39.752024 kubelet[2513]: I0307 02:09:39.752014 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f89c972-7965-4404-a061-e7bd576a5440-kube-proxy\") pod \"kube-proxy-wzdnb\" (UID: \"7f89c972-7965-4404-a061-e7bd576a5440\") 
" pod="kube-system/kube-proxy-wzdnb" Mar 7 02:09:39.752024 kubelet[2513]: I0307 02:09:39.752027 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f89c972-7965-4404-a061-e7bd576a5440-xtables-lock\") pod \"kube-proxy-wzdnb\" (UID: \"7f89c972-7965-4404-a061-e7bd576a5440\") " pod="kube-system/kube-proxy-wzdnb" Mar 7 02:09:39.853304 kubelet[2513]: I0307 02:09:39.853118 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjt7\" (UniqueName: \"kubernetes.io/projected/d99811ee-54a5-4282-9ac3-568ca40c2516-kube-api-access-dbjt7\") pod \"tigera-operator-5588576f44-q4r4j\" (UID: \"d99811ee-54a5-4282-9ac3-568ca40c2516\") " pod="tigera-operator/tigera-operator-5588576f44-q4r4j" Mar 7 02:09:39.853304 kubelet[2513]: I0307 02:09:39.853231 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d99811ee-54a5-4282-9ac3-568ca40c2516-var-lib-calico\") pod \"tigera-operator-5588576f44-q4r4j\" (UID: \"d99811ee-54a5-4282-9ac3-568ca40c2516\") " pod="tigera-operator/tigera-operator-5588576f44-q4r4j" Mar 7 02:09:39.895313 kubelet[2513]: E0307 02:09:39.895161 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:39.896242 containerd[1463]: time="2026-03-07T02:09:39.896185183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzdnb,Uid:7f89c972-7965-4404-a061-e7bd576a5440,Namespace:kube-system,Attempt:0,}" Mar 7 02:09:39.932294 containerd[1463]: time="2026-03-07T02:09:39.931220149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:09:39.932294 containerd[1463]: time="2026-03-07T02:09:39.932011125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:09:39.932294 containerd[1463]: time="2026-03-07T02:09:39.932025201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:09:39.932294 containerd[1463]: time="2026-03-07T02:09:39.932136078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:09:39.962903 systemd[1]: Started cri-containerd-b12cb19d3ad8a9b440172edcbc0abf8c77a411d7bd9f0a21bb5d641cdefd5e95.scope - libcontainer container b12cb19d3ad8a9b440172edcbc0abf8c77a411d7bd9f0a21bb5d641cdefd5e95. Mar 7 02:09:39.995551 containerd[1463]: time="2026-03-07T02:09:39.995429553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzdnb,Uid:7f89c972-7965-4404-a061-e7bd576a5440,Namespace:kube-system,Attempt:0,} returns sandbox id \"b12cb19d3ad8a9b440172edcbc0abf8c77a411d7bd9f0a21bb5d641cdefd5e95\"" Mar 7 02:09:39.997570 kubelet[2513]: E0307 02:09:39.996535 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:40.003727 containerd[1463]: time="2026-03-07T02:09:40.003614746Z" level=info msg="CreateContainer within sandbox \"b12cb19d3ad8a9b440172edcbc0abf8c77a411d7bd9f0a21bb5d641cdefd5e95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 02:09:40.021954 containerd[1463]: time="2026-03-07T02:09:40.021799422Z" level=info msg="CreateContainer within sandbox \"b12cb19d3ad8a9b440172edcbc0abf8c77a411d7bd9f0a21bb5d641cdefd5e95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"1f6ab41cb2c8ff34a4de652c4d915a45c452ef6511d3fb6a42375b1ac2266de7\"" Mar 7 02:09:40.022739 containerd[1463]: time="2026-03-07T02:09:40.022703515Z" level=info msg="StartContainer for \"1f6ab41cb2c8ff34a4de652c4d915a45c452ef6511d3fb6a42375b1ac2266de7\"" Mar 7 02:09:40.058120 containerd[1463]: time="2026-03-07T02:09:40.058004689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-q4r4j,Uid:d99811ee-54a5-4282-9ac3-568ca40c2516,Namespace:tigera-operator,Attempt:0,}" Mar 7 02:09:40.063034 systemd[1]: Started cri-containerd-1f6ab41cb2c8ff34a4de652c4d915a45c452ef6511d3fb6a42375b1ac2266de7.scope - libcontainer container 1f6ab41cb2c8ff34a4de652c4d915a45c452ef6511d3fb6a42375b1ac2266de7. Mar 7 02:09:40.096938 containerd[1463]: time="2026-03-07T02:09:40.095795107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:09:40.096938 containerd[1463]: time="2026-03-07T02:09:40.095928756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:09:40.096938 containerd[1463]: time="2026-03-07T02:09:40.095955456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:09:40.096938 containerd[1463]: time="2026-03-07T02:09:40.096081131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:09:40.099923 containerd[1463]: time="2026-03-07T02:09:40.099631283Z" level=info msg="StartContainer for \"1f6ab41cb2c8ff34a4de652c4d915a45c452ef6511d3fb6a42375b1ac2266de7\" returns successfully" Mar 7 02:09:40.127012 systemd[1]: Started cri-containerd-5f95d2eb7e1f2bd82fa012d0dc9f073e486a0e19f906cff9565695edbebd21d9.scope - libcontainer container 5f95d2eb7e1f2bd82fa012d0dc9f073e486a0e19f906cff9565695edbebd21d9. 
Mar 7 02:09:40.175803 containerd[1463]: time="2026-03-07T02:09:40.175720343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-q4r4j,Uid:d99811ee-54a5-4282-9ac3-568ca40c2516,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5f95d2eb7e1f2bd82fa012d0dc9f073e486a0e19f906cff9565695edbebd21d9\""
Mar 7 02:09:40.182231 containerd[1463]: time="2026-03-07T02:09:40.182200599Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 02:09:40.769269 kubelet[2513]: E0307 02:09:40.769147 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:41.057278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297902644.mount: Deactivated successfully.
Mar 7 02:09:42.007918 containerd[1463]: time="2026-03-07T02:09:42.007687247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:09:42.008898 containerd[1463]: time="2026-03-07T02:09:42.008808563Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 02:09:42.010379 containerd[1463]: time="2026-03-07T02:09:42.010284086Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:09:42.014384 containerd[1463]: time="2026-03-07T02:09:42.014287006Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:09:42.016025 containerd[1463]: time="2026-03-07T02:09:42.015931918Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.833545122s"
Mar 7 02:09:42.016025 containerd[1463]: time="2026-03-07T02:09:42.016002800Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 02:09:42.023038 containerd[1463]: time="2026-03-07T02:09:42.022975545Z" level=info msg="CreateContainer within sandbox \"5f95d2eb7e1f2bd82fa012d0dc9f073e486a0e19f906cff9565695edbebd21d9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 02:09:42.039315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907855739.mount: Deactivated successfully.
Mar 7 02:09:42.040430 containerd[1463]: time="2026-03-07T02:09:42.040324453Z" level=info msg="CreateContainer within sandbox \"5f95d2eb7e1f2bd82fa012d0dc9f073e486a0e19f906cff9565695edbebd21d9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0e56537523f132e9e72725cd6f6ff23370147b09f2cdce60c563e3c71a12e8f0\""
Mar 7 02:09:42.042596 containerd[1463]: time="2026-03-07T02:09:42.041463247Z" level=info msg="StartContainer for \"0e56537523f132e9e72725cd6f6ff23370147b09f2cdce60c563e3c71a12e8f0\""
Mar 7 02:09:42.097338 systemd[1]: Started cri-containerd-0e56537523f132e9e72725cd6f6ff23370147b09f2cdce60c563e3c71a12e8f0.scope - libcontainer container 0e56537523f132e9e72725cd6f6ff23370147b09f2cdce60c563e3c71a12e8f0.
Mar 7 02:09:42.138133 containerd[1463]: time="2026-03-07T02:09:42.138013898Z" level=info msg="StartContainer for \"0e56537523f132e9e72725cd6f6ff23370147b09f2cdce60c563e3c71a12e8f0\" returns successfully"
Mar 7 02:09:42.601799 kubelet[2513]: E0307 02:09:42.601677 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:42.617788 kubelet[2513]: I0307 02:09:42.617619 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzdnb" podStartSLOduration=3.617598615 podStartE2EDuration="3.617598615s" podCreationTimestamp="2026-03-07 02:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:09:40.781086973 +0000 UTC m=+7.156602109" watchObservedRunningTime="2026-03-07 02:09:42.617598615 +0000 UTC m=+8.993113741"
Mar 7 02:09:42.776528 kubelet[2513]: E0307 02:09:42.776462 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:46.513557 sudo[1641]: pam_unix(sudo:session): session closed for user root
Mar 7 02:09:46.517320 sshd[1638]: pam_unix(sshd:session): session closed for user core
Mar 7 02:09:46.522497 systemd[1]: sshd@6-10.0.0.4:22-10.0.0.1:34848.service: Deactivated successfully.
Mar 7 02:09:46.527389 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 02:09:46.529005 systemd[1]: session-7.scope: Consumed 5.855s CPU time, 161.9M memory peak, 0B memory swap peak.
Mar 7 02:09:46.531423 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit.
Mar 7 02:09:46.534941 systemd-logind[1453]: Removed session 7.
Mar 7 02:09:48.265500 kubelet[2513]: E0307 02:09:48.265446 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:48.317022 kubelet[2513]: I0307 02:09:48.316275 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-q4r4j" podStartSLOduration=7.478104396 podStartE2EDuration="9.316257687s" podCreationTimestamp="2026-03-07 02:09:39 +0000 UTC" firstStartedPulling="2026-03-07 02:09:40.179198507 +0000 UTC m=+6.554713643" lastFinishedPulling="2026-03-07 02:09:42.017351808 +0000 UTC m=+8.392866934" observedRunningTime="2026-03-07 02:09:42.803952968 +0000 UTC m=+9.179468095" watchObservedRunningTime="2026-03-07 02:09:48.316257687 +0000 UTC m=+14.691772832"
Mar 7 02:09:48.803767 kubelet[2513]: E0307 02:09:48.803191 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:48.822021 kubelet[2513]: E0307 02:09:48.821513 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:09:49.214414 systemd[1]: Created slice kubepods-besteffort-pod1233c502_7f5e_4776_8390_8c4c7532b33a.slice - libcontainer container kubepods-besteffort-pod1233c502_7f5e_4776_8390_8c4c7532b33a.slice.
Mar 7 02:09:49.336387 kubelet[2513]: I0307 02:09:49.336313 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-lib-modules\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.336387 kubelet[2513]: I0307 02:09:49.336377 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-bpffs\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.336903 kubelet[2513]: I0307 02:09:49.336395 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-cni-net-dir\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.336903 kubelet[2513]: I0307 02:09:49.336418 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1233c502-7f5e-4776-8390-8c4c7532b33a-typha-certs\") pod \"calico-typha-65dbbbb67c-99f8f\" (UID: \"1233c502-7f5e-4776-8390-8c4c7532b33a\") " pod="calico-system/calico-typha-65dbbbb67c-99f8f"
Mar 7 02:09:49.336903 kubelet[2513]: I0307 02:09:49.336431 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-cni-bin-dir\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.336903 kubelet[2513]: I0307 02:09:49.336447 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-cni-log-dir\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.336903 kubelet[2513]: I0307 02:09:49.336461 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1233c502-7f5e-4776-8390-8c4c7532b33a-tigera-ca-bundle\") pod \"calico-typha-65dbbbb67c-99f8f\" (UID: \"1233c502-7f5e-4776-8390-8c4c7532b33a\") " pod="calico-system/calico-typha-65dbbbb67c-99f8f"
Mar 7 02:09:49.337075 kubelet[2513]: I0307 02:09:49.336476 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq7sg\" (UniqueName: \"kubernetes.io/projected/1233c502-7f5e-4776-8390-8c4c7532b33a-kube-api-access-jq7sg\") pod \"calico-typha-65dbbbb67c-99f8f\" (UID: \"1233c502-7f5e-4776-8390-8c4c7532b33a\") " pod="calico-system/calico-typha-65dbbbb67c-99f8f"
Mar 7 02:09:49.337075 kubelet[2513]: I0307 02:09:49.336490 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-flexvol-driver-host\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.341066 systemd[1]: Created slice kubepods-besteffort-podbdaef1bd_0601_41c7_95e3_766fbc0e8104.slice - libcontainer container kubepods-besteffort-podbdaef1bd_0601_41c7_95e3_766fbc0e8104.slice.
Mar 7 02:09:49.437212 kubelet[2513]: I0307 02:09:49.437000 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bdaef1bd-0601-41c7-95e3-766fbc0e8104-node-certs\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437212 kubelet[2513]: I0307 02:09:49.437104 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vv6w\" (UniqueName: \"kubernetes.io/projected/bdaef1bd-0601-41c7-95e3-766fbc0e8104-kube-api-access-4vv6w\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437212 kubelet[2513]: I0307 02:09:49.437174 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdaef1bd-0601-41c7-95e3-766fbc0e8104-tigera-ca-bundle\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437212 kubelet[2513]: I0307 02:09:49.437200 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-xtables-lock\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437412 kubelet[2513]: I0307 02:09:49.437250 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-var-run-calico\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437412 kubelet[2513]: I0307 02:09:49.437275 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-nodeproc\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437412 kubelet[2513]: I0307 02:09:49.437298 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-var-lib-calico\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437526 kubelet[2513]: I0307 02:09:49.437426 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-policysync\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.437526 kubelet[2513]: I0307 02:09:49.437455 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/bdaef1bd-0601-41c7-95e3-766fbc0e8104-sys-fs\") pod \"calico-node-2ng8d\" (UID: \"bdaef1bd-0601-41c7-95e3-766fbc0e8104\") " pod="calico-system/calico-node-2ng8d"
Mar 7 02:09:49.443086 kubelet[2513]: E0307 02:09:49.442660 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575"
Mar 7 02:09:49.464350 kubelet[2513]: E0307 02:09:49.464231 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 02:09:49.464350 kubelet[2513]: W0307 02:09:49.464284 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 02:09:49.464350 kubelet[2513]: E0307 02:09:49.464311 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 02:09:49.468662 kubelet[2513]: E0307 02:09:49.468283 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 02:09:49.468662 kubelet[2513]: W0307 02:09:49.468341 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 02:09:49.468662 kubelet[2513]: E0307 02:09:49.468371 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 7 02:09:49.526370 kubelet[2513]: E0307 02:09:49.526275 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:49.527207 containerd[1463]: time="2026-03-07T02:09:49.527121073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65dbbbb67c-99f8f,Uid:1233c502-7f5e-4776-8390-8c4c7532b33a,Namespace:calico-system,Attempt:0,}" Mar 7 02:09:49.538498 kubelet[2513]: E0307 02:09:49.538441 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.538498 kubelet[2513]: W0307 02:09:49.538491 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.538699 kubelet[2513]: E0307 02:09:49.538518 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.539358 kubelet[2513]: E0307 02:09:49.539310 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.539358 kubelet[2513]: W0307 02:09:49.539337 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.539462 kubelet[2513]: E0307 02:09:49.539366 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.540087 kubelet[2513]: E0307 02:09:49.540047 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.540087 kubelet[2513]: W0307 02:09:49.540064 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.540087 kubelet[2513]: E0307 02:09:49.540079 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.540964 kubelet[2513]: E0307 02:09:49.540923 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.541027 kubelet[2513]: W0307 02:09:49.540963 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.541027 kubelet[2513]: E0307 02:09:49.540979 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.542194 kubelet[2513]: E0307 02:09:49.542123 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.542194 kubelet[2513]: W0307 02:09:49.542138 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.542194 kubelet[2513]: E0307 02:09:49.542153 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.542806 kubelet[2513]: E0307 02:09:49.542759 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.542966 kubelet[2513]: W0307 02:09:49.542811 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.542966 kubelet[2513]: E0307 02:09:49.542920 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.543564 kubelet[2513]: E0307 02:09:49.543523 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.543659 kubelet[2513]: W0307 02:09:49.543568 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.543659 kubelet[2513]: E0307 02:09:49.543629 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.546110 kubelet[2513]: E0307 02:09:49.546062 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.546110 kubelet[2513]: W0307 02:09:49.546104 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.546226 kubelet[2513]: E0307 02:09:49.546123 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.546951 kubelet[2513]: E0307 02:09:49.546645 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.546951 kubelet[2513]: W0307 02:09:49.546670 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.546951 kubelet[2513]: E0307 02:09:49.546691 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.547236 kubelet[2513]: E0307 02:09:49.547221 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.547322 kubelet[2513]: W0307 02:09:49.547305 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.547520 kubelet[2513]: E0307 02:09:49.547389 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.547881 kubelet[2513]: E0307 02:09:49.547803 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.547938 kubelet[2513]: W0307 02:09:49.547898 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.547938 kubelet[2513]: E0307 02:09:49.547916 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.548483 kubelet[2513]: E0307 02:09:49.548281 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.548483 kubelet[2513]: W0307 02:09:49.548298 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.548483 kubelet[2513]: E0307 02:09:49.548313 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.548763 kubelet[2513]: E0307 02:09:49.548728 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.548806 kubelet[2513]: W0307 02:09:49.548763 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.548806 kubelet[2513]: E0307 02:09:49.548779 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.549233 kubelet[2513]: E0307 02:09:49.549200 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.549285 kubelet[2513]: W0307 02:09:49.549234 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.549285 kubelet[2513]: E0307 02:09:49.549248 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.550051 kubelet[2513]: E0307 02:09:49.549990 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.550051 kubelet[2513]: W0307 02:09:49.550034 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.550051 kubelet[2513]: E0307 02:09:49.550049 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.550716 kubelet[2513]: E0307 02:09:49.550679 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.550716 kubelet[2513]: W0307 02:09:49.550714 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.550804 kubelet[2513]: E0307 02:09:49.550730 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.551386 kubelet[2513]: E0307 02:09:49.551349 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.551436 kubelet[2513]: W0307 02:09:49.551386 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.551436 kubelet[2513]: E0307 02:09:49.551402 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.552033 kubelet[2513]: E0307 02:09:49.551999 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.552033 kubelet[2513]: W0307 02:09:49.552031 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.552129 kubelet[2513]: E0307 02:09:49.552045 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 02:09:49.552579 kubelet[2513]: E0307 02:09:49.552540 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 02:09:49.552579 kubelet[2513]: W0307 02:09:49.552576 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 02:09:49.552579 kubelet[2513]: E0307 02:09:49.552630 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 02:09:49.585224 containerd[1463]: time="2026-03-07T02:09:49.584172314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:09:49.585224 containerd[1463]: time="2026-03-07T02:09:49.584235682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:09:49.585224 containerd[1463]: time="2026-03-07T02:09:49.584253145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:49.585224 containerd[1463]: time="2026-03-07T02:09:49.584397394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:09:49.622145 systemd[1]: Started cri-containerd-d9444666e194a0ebf44238a7122edeea25f37f87087282c1558b66fd7d7e7e1f.scope - libcontainer container d9444666e194a0ebf44238a7122edeea25f37f87087282c1558b66fd7d7e7e1f.
Mar 7 02:09:49.652065 containerd[1463]: time="2026-03-07T02:09:49.652011155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2ng8d,Uid:bdaef1bd-0601-41c7-95e3-766fbc0e8104,Namespace:calico-system,Attempt:0,}" Mar 7 02:09:49.706464 kubelet[2513]: E0307 02:09:49.706168 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.706464 kubelet[2513]: W0307 02:09:49.706202 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.706706 kubelet[2513]: E0307 02:09:49.706569 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.706706 kubelet[2513]: I0307 02:09:49.706652 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvd6\" (UniqueName: \"kubernetes.io/projected/635ed81a-2a01-425c-84a2-97d0d48a5575-kube-api-access-bsvd6\") pod \"csi-node-driver-z8n85\" (UID: \"635ed81a-2a01-425c-84a2-97d0d48a5575\") " pod="calico-system/csi-node-driver-z8n85" Mar 7 02:09:49.707260 kubelet[2513]: E0307 02:09:49.707193 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.707260 kubelet[2513]: W0307 02:09:49.707245 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.707369 kubelet[2513]: E0307 02:09:49.707263 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.707369 kubelet[2513]: I0307 02:09:49.707293 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/635ed81a-2a01-425c-84a2-97d0d48a5575-socket-dir\") pod \"csi-node-driver-z8n85\" (UID: \"635ed81a-2a01-425c-84a2-97d0d48a5575\") " pod="calico-system/csi-node-driver-z8n85" Mar 7 02:09:49.708070 kubelet[2513]: E0307 02:09:49.708027 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.708070 kubelet[2513]: W0307 02:09:49.708067 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.708181 kubelet[2513]: E0307 02:09:49.708084 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.708181 kubelet[2513]: I0307 02:09:49.708113 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/635ed81a-2a01-425c-84a2-97d0d48a5575-varrun\") pod \"csi-node-driver-z8n85\" (UID: \"635ed81a-2a01-425c-84a2-97d0d48a5575\") " pod="calico-system/csi-node-driver-z8n85" Mar 7 02:09:49.708771 kubelet[2513]: E0307 02:09:49.708742 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.709212 kubelet[2513]: W0307 02:09:49.708916 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.709212 kubelet[2513]: E0307 02:09:49.708952 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.709434 kubelet[2513]: I0307 02:09:49.709341 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/635ed81a-2a01-425c-84a2-97d0d48a5575-registration-dir\") pod \"csi-node-driver-z8n85\" (UID: \"635ed81a-2a01-425c-84a2-97d0d48a5575\") " pod="calico-system/csi-node-driver-z8n85" Mar 7 02:09:49.710069 kubelet[2513]: E0307 02:09:49.709625 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.710069 kubelet[2513]: W0307 02:09:49.709645 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.710069 kubelet[2513]: E0307 02:09:49.709660 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.710207 kubelet[2513]: E0307 02:09:49.710162 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.710207 kubelet[2513]: W0307 02:09:49.710174 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.710207 kubelet[2513]: E0307 02:09:49.710185 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.711200 kubelet[2513]: E0307 02:09:49.711142 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.711200 kubelet[2513]: W0307 02:09:49.711186 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.711200 kubelet[2513]: E0307 02:09:49.711201 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.711738 kubelet[2513]: E0307 02:09:49.711649 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.711738 kubelet[2513]: W0307 02:09:49.711681 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.711738 kubelet[2513]: E0307 02:09:49.711692 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.712949 kubelet[2513]: E0307 02:09:49.712905 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.712949 kubelet[2513]: W0307 02:09:49.712940 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.712949 kubelet[2513]: E0307 02:09:49.712951 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.713868 kubelet[2513]: I0307 02:09:49.713370 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/635ed81a-2a01-425c-84a2-97d0d48a5575-kubelet-dir\") pod \"csi-node-driver-z8n85\" (UID: \"635ed81a-2a01-425c-84a2-97d0d48a5575\") " pod="calico-system/csi-node-driver-z8n85" Mar 7 02:09:49.714905 kubelet[2513]: E0307 02:09:49.714663 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.714905 kubelet[2513]: W0307 02:09:49.714698 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.714905 kubelet[2513]: E0307 02:09:49.714768 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.716174 containerd[1463]: time="2026-03-07T02:09:49.714070664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:09:49.716174 containerd[1463]: time="2026-03-07T02:09:49.714165791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:09:49.716174 containerd[1463]: time="2026-03-07T02:09:49.714332021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:09:49.716357 kubelet[2513]: E0307 02:09:49.716166 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.716357 kubelet[2513]: W0307 02:09:49.716177 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.716357 kubelet[2513]: E0307 02:09:49.716187 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.716549 containerd[1463]: time="2026-03-07T02:09:49.715111640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:09:49.717309 containerd[1463]: time="2026-03-07T02:09:49.717103803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65dbbbb67c-99f8f,Uid:1233c502-7f5e-4776-8390-8c4c7532b33a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9444666e194a0ebf44238a7122edeea25f37f87087282c1558b66fd7d7e7e1f\"" Mar 7 02:09:49.717375 kubelet[2513]: E0307 02:09:49.717241 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.717375 kubelet[2513]: W0307 02:09:49.717251 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.717375 kubelet[2513]: E0307 02:09:49.717262 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.719274 kubelet[2513]: E0307 02:09:49.718308 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.719274 kubelet[2513]: W0307 02:09:49.718327 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.719274 kubelet[2513]: E0307 02:09:49.718342 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.721650 kubelet[2513]: E0307 02:09:49.721151 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:49.724425 kubelet[2513]: E0307 02:09:49.722988 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.724425 kubelet[2513]: W0307 02:09:49.723041 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.724425 kubelet[2513]: E0307 02:09:49.723065 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.725247 kubelet[2513]: E0307 02:09:49.724570 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.725247 kubelet[2513]: W0307 02:09:49.724638 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.725247 kubelet[2513]: E0307 02:09:49.724656 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.725449 containerd[1463]: time="2026-03-07T02:09:49.724563743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 02:09:49.761135 systemd[1]: Started cri-containerd-497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc.scope - libcontainer container 497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc. 
Mar 7 02:09:49.811013 containerd[1463]: time="2026-03-07T02:09:49.810917453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2ng8d,Uid:bdaef1bd-0601-41c7-95e3-766fbc0e8104,Namespace:calico-system,Attempt:0,} returns sandbox id \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\"" Mar 7 02:09:49.816424 kubelet[2513]: E0307 02:09:49.816379 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.816424 kubelet[2513]: W0307 02:09:49.816414 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.816654 kubelet[2513]: E0307 02:09:49.816435 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.817170 kubelet[2513]: E0307 02:09:49.817123 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.817170 kubelet[2513]: W0307 02:09:49.817164 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.817255 kubelet[2513]: E0307 02:09:49.817188 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.817738 kubelet[2513]: E0307 02:09:49.817653 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.817738 kubelet[2513]: W0307 02:09:49.817693 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.817738 kubelet[2513]: E0307 02:09:49.817711 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.818935 kubelet[2513]: E0307 02:09:49.818792 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.818935 kubelet[2513]: W0307 02:09:49.818866 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.818935 kubelet[2513]: E0307 02:09:49.818880 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.819263 kubelet[2513]: E0307 02:09:49.819202 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.819263 kubelet[2513]: W0307 02:09:49.819243 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.819263 kubelet[2513]: E0307 02:09:49.819258 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.819769 kubelet[2513]: E0307 02:09:49.819703 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.819769 kubelet[2513]: W0307 02:09:49.819747 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.819769 kubelet[2513]: E0307 02:09:49.819764 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.820265 kubelet[2513]: E0307 02:09:49.820202 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.820265 kubelet[2513]: W0307 02:09:49.820235 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.820265 kubelet[2513]: E0307 02:09:49.820252 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.820722 kubelet[2513]: E0307 02:09:49.820660 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.820722 kubelet[2513]: W0307 02:09:49.820691 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.820722 kubelet[2513]: E0307 02:09:49.820701 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.821361 kubelet[2513]: E0307 02:09:49.821098 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.821361 kubelet[2513]: W0307 02:09:49.821137 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.821361 kubelet[2513]: E0307 02:09:49.821155 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.822444 kubelet[2513]: E0307 02:09:49.821903 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.822444 kubelet[2513]: W0307 02:09:49.821930 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.822444 kubelet[2513]: E0307 02:09:49.821959 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.822444 kubelet[2513]: E0307 02:09:49.822414 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.822444 kubelet[2513]: W0307 02:09:49.822426 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.822444 kubelet[2513]: E0307 02:09:49.822438 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.823013 kubelet[2513]: E0307 02:09:49.822974 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.823013 kubelet[2513]: W0307 02:09:49.823003 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.823013 kubelet[2513]: E0307 02:09:49.823016 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.823490 kubelet[2513]: E0307 02:09:49.823421 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.823490 kubelet[2513]: W0307 02:09:49.823459 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.823490 kubelet[2513]: E0307 02:09:49.823473 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.824236 kubelet[2513]: E0307 02:09:49.824104 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.824236 kubelet[2513]: W0307 02:09:49.824128 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.824236 kubelet[2513]: E0307 02:09:49.824139 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.824759 kubelet[2513]: E0307 02:09:49.824540 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.824759 kubelet[2513]: W0307 02:09:49.824552 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.824759 kubelet[2513]: E0307 02:09:49.824562 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.827320 kubelet[2513]: E0307 02:09:49.827220 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.827320 kubelet[2513]: W0307 02:09:49.827269 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.827320 kubelet[2513]: E0307 02:09:49.827293 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.828018 kubelet[2513]: E0307 02:09:49.827920 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.828018 kubelet[2513]: W0307 02:09:49.827956 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.828018 kubelet[2513]: E0307 02:09:49.827974 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.828428 kubelet[2513]: E0307 02:09:49.828387 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.828428 kubelet[2513]: W0307 02:09:49.828423 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.828492 kubelet[2513]: E0307 02:09:49.828440 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.829068 kubelet[2513]: E0307 02:09:49.829043 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.829068 kubelet[2513]: W0307 02:09:49.829058 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.829068 kubelet[2513]: E0307 02:09:49.829070 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.829669 kubelet[2513]: E0307 02:09:49.829578 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.829669 kubelet[2513]: W0307 02:09:49.829649 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.829669 kubelet[2513]: E0307 02:09:49.829665 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.830205 kubelet[2513]: E0307 02:09:49.830146 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.830205 kubelet[2513]: W0307 02:09:49.830176 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.830205 kubelet[2513]: E0307 02:09:49.830192 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.830957 kubelet[2513]: E0307 02:09:49.830768 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.831054 kubelet[2513]: W0307 02:09:49.831016 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.831081 kubelet[2513]: E0307 02:09:49.831068 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.831976 kubelet[2513]: E0307 02:09:49.831949 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.831976 kubelet[2513]: W0307 02:09:49.831975 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.832047 kubelet[2513]: E0307 02:09:49.831989 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.832437 kubelet[2513]: E0307 02:09:49.832406 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.832477 kubelet[2513]: W0307 02:09:49.832439 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.832477 kubelet[2513]: E0307 02:09:49.832456 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:49.833201 kubelet[2513]: E0307 02:09:49.833172 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.833201 kubelet[2513]: W0307 02:09:49.833199 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.833296 kubelet[2513]: E0307 02:09:49.833213 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:49.846510 kubelet[2513]: E0307 02:09:49.846434 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:49.846510 kubelet[2513]: W0307 02:09:49.846486 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:49.846510 kubelet[2513]: E0307 02:09:49.846512 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:50.696477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778113967.mount: Deactivated successfully. 
Mar 7 02:09:51.676282 containerd[1463]: time="2026-03-07T02:09:51.676075242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:51.680342 containerd[1463]: time="2026-03-07T02:09:51.679665452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 02:09:51.682355 containerd[1463]: time="2026-03-07T02:09:51.682247015Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:51.687992 containerd[1463]: time="2026-03-07T02:09:51.687947629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:51.689397 containerd[1463]: time="2026-03-07T02:09:51.689114675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.964456125s" Mar 7 02:09:51.689397 containerd[1463]: time="2026-03-07T02:09:51.689171030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 02:09:51.692751 containerd[1463]: time="2026-03-07T02:09:51.692511624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 02:09:51.714689 containerd[1463]: time="2026-03-07T02:09:51.714572049Z" level=info msg="CreateContainer within sandbox \"d9444666e194a0ebf44238a7122edeea25f37f87087282c1558b66fd7d7e7e1f\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 02:09:51.737661 kubelet[2513]: E0307 02:09:51.737528 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:09:51.758618 containerd[1463]: time="2026-03-07T02:09:51.758193932Z" level=info msg="CreateContainer within sandbox \"d9444666e194a0ebf44238a7122edeea25f37f87087282c1558b66fd7d7e7e1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f807f9ed7613bf4970b57670cbc0b288bf789698a6177acbdfff5dfe8ad2dfe\"" Mar 7 02:09:51.762148 containerd[1463]: time="2026-03-07T02:09:51.761039431Z" level=info msg="StartContainer for \"4f807f9ed7613bf4970b57670cbc0b288bf789698a6177acbdfff5dfe8ad2dfe\"" Mar 7 02:09:51.822242 systemd[1]: Started cri-containerd-4f807f9ed7613bf4970b57670cbc0b288bf789698a6177acbdfff5dfe8ad2dfe.scope - libcontainer container 4f807f9ed7613bf4970b57670cbc0b288bf789698a6177acbdfff5dfe8ad2dfe. 
Mar 7 02:09:51.916768 containerd[1463]: time="2026-03-07T02:09:51.915488195Z" level=info msg="StartContainer for \"4f807f9ed7613bf4970b57670cbc0b288bf789698a6177acbdfff5dfe8ad2dfe\" returns successfully" Mar 7 02:09:52.834272 kubelet[2513]: E0307 02:09:52.833974 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:52.866017 kubelet[2513]: I0307 02:09:52.861673 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65dbbbb67c-99f8f" podStartSLOduration=1.893775009 podStartE2EDuration="3.861652076s" podCreationTimestamp="2026-03-07 02:09:49 +0000 UTC" firstStartedPulling="2026-03-07 02:09:49.724213281 +0000 UTC m=+16.099728407" lastFinishedPulling="2026-03-07 02:09:51.692090348 +0000 UTC m=+18.067605474" observedRunningTime="2026-03-07 02:09:52.858699875 +0000 UTC m=+19.234215041" watchObservedRunningTime="2026-03-07 02:09:52.861652076 +0000 UTC m=+19.237167212" Mar 7 02:09:52.930652 kubelet[2513]: E0307 02:09:52.930268 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.930652 kubelet[2513]: W0307 02:09:52.930331 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.930652 kubelet[2513]: E0307 02:09:52.930362 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.930982 kubelet[2513]: E0307 02:09:52.930784 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.930982 kubelet[2513]: W0307 02:09:52.930797 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.930982 kubelet[2513]: E0307 02:09:52.930883 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.932648 kubelet[2513]: E0307 02:09:52.932522 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.932648 kubelet[2513]: W0307 02:09:52.932571 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.932648 kubelet[2513]: E0307 02:09:52.932630 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.933667 kubelet[2513]: E0307 02:09:52.933627 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.933667 kubelet[2513]: W0307 02:09:52.933663 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.933768 kubelet[2513]: E0307 02:09:52.933680 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.934151 kubelet[2513]: E0307 02:09:52.934116 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.934151 kubelet[2513]: W0307 02:09:52.934149 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.934259 kubelet[2513]: E0307 02:09:52.934163 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.934543 kubelet[2513]: E0307 02:09:52.934496 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.934646 kubelet[2513]: W0307 02:09:52.934546 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.934646 kubelet[2513]: E0307 02:09:52.934562 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.935265 kubelet[2513]: E0307 02:09:52.935063 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.935265 kubelet[2513]: W0307 02:09:52.935080 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.935265 kubelet[2513]: E0307 02:09:52.935091 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.935520 kubelet[2513]: E0307 02:09:52.935482 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.935520 kubelet[2513]: W0307 02:09:52.935519 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.935669 kubelet[2513]: E0307 02:09:52.935533 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.936061 kubelet[2513]: E0307 02:09:52.935980 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.936061 kubelet[2513]: W0307 02:09:52.935993 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.936061 kubelet[2513]: E0307 02:09:52.936007 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.936343 kubelet[2513]: E0307 02:09:52.936311 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.936343 kubelet[2513]: W0307 02:09:52.936341 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.936434 kubelet[2513]: E0307 02:09:52.936355 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.936737 kubelet[2513]: E0307 02:09:52.936701 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.936803 kubelet[2513]: W0307 02:09:52.936740 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.936803 kubelet[2513]: E0307 02:09:52.936755 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.937201 kubelet[2513]: E0307 02:09:52.937158 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.937201 kubelet[2513]: W0307 02:09:52.937192 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.937305 kubelet[2513]: E0307 02:09:52.937208 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.937796 kubelet[2513]: E0307 02:09:52.937758 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.937796 kubelet[2513]: W0307 02:09:52.937792 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.938015 kubelet[2513]: E0307 02:09:52.937807 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.939121 kubelet[2513]: E0307 02:09:52.939106 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.939121 kubelet[2513]: W0307 02:09:52.939118 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.939228 kubelet[2513]: E0307 02:09:52.939130 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.940070 kubelet[2513]: E0307 02:09:52.939942 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.940070 kubelet[2513]: W0307 02:09:52.939977 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.940070 kubelet[2513]: E0307 02:09:52.939992 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.947483 kubelet[2513]: E0307 02:09:52.947143 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.947483 kubelet[2513]: W0307 02:09:52.947236 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.947483 kubelet[2513]: E0307 02:09:52.947259 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.948702 kubelet[2513]: E0307 02:09:52.948660 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.948702 kubelet[2513]: W0307 02:09:52.948694 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.948964 kubelet[2513]: E0307 02:09:52.948711 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.953274 kubelet[2513]: E0307 02:09:52.952147 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.953274 kubelet[2513]: W0307 02:09:52.952168 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.953274 kubelet[2513]: E0307 02:09:52.952187 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.955522 kubelet[2513]: E0307 02:09:52.955328 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.955522 kubelet[2513]: W0307 02:09:52.955365 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.955522 kubelet[2513]: E0307 02:09:52.955386 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.956082 kubelet[2513]: E0307 02:09:52.955996 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.956082 kubelet[2513]: W0307 02:09:52.956040 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.956082 kubelet[2513]: E0307 02:09:52.956056 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.956606 kubelet[2513]: E0307 02:09:52.956537 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.956606 kubelet[2513]: W0307 02:09:52.956570 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.956939 kubelet[2513]: E0307 02:09:52.956624 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.957703 kubelet[2513]: E0307 02:09:52.957561 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.957703 kubelet[2513]: W0307 02:09:52.957629 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.957703 kubelet[2513]: E0307 02:09:52.957645 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.958275 kubelet[2513]: E0307 02:09:52.958246 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.958275 kubelet[2513]: W0307 02:09:52.958259 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.958275 kubelet[2513]: E0307 02:09:52.958272 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.959386 kubelet[2513]: E0307 02:09:52.959242 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.959386 kubelet[2513]: W0307 02:09:52.959256 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.959386 kubelet[2513]: E0307 02:09:52.959271 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.959733 kubelet[2513]: E0307 02:09:52.959675 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.959733 kubelet[2513]: W0307 02:09:52.959706 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.959733 kubelet[2513]: E0307 02:09:52.959721 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.960342 kubelet[2513]: E0307 02:09:52.960283 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.960342 kubelet[2513]: W0307 02:09:52.960316 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.960342 kubelet[2513]: E0307 02:09:52.960330 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.961234 kubelet[2513]: E0307 02:09:52.961009 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.961234 kubelet[2513]: W0307 02:09:52.961046 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.961234 kubelet[2513]: E0307 02:09:52.961060 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.962538 kubelet[2513]: E0307 02:09:52.962193 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.962538 kubelet[2513]: W0307 02:09:52.962227 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.962538 kubelet[2513]: E0307 02:09:52.962244 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.965206 kubelet[2513]: E0307 02:09:52.965105 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.965206 kubelet[2513]: W0307 02:09:52.965125 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.965206 kubelet[2513]: E0307 02:09:52.965145 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.966185 kubelet[2513]: E0307 02:09:52.965753 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.966185 kubelet[2513]: W0307 02:09:52.965792 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.966185 kubelet[2513]: E0307 02:09:52.965807 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.966390 kubelet[2513]: E0307 02:09:52.966225 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.966390 kubelet[2513]: W0307 02:09:52.966328 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.966390 kubelet[2513]: E0307 02:09:52.966341 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:52.967101 kubelet[2513]: E0307 02:09:52.967057 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.967101 kubelet[2513]: W0307 02:09:52.967093 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.967200 kubelet[2513]: E0307 02:09:52.967108 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:09:52.967811 kubelet[2513]: E0307 02:09:52.967773 2513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:09:52.967811 kubelet[2513]: W0307 02:09:52.967804 2513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:09:52.967955 kubelet[2513]: E0307 02:09:52.967905 2513 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:09:53.356771 containerd[1463]: time="2026-03-07T02:09:53.355930279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:53.358182 containerd[1463]: time="2026-03-07T02:09:53.358119083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 02:09:53.360549 containerd[1463]: time="2026-03-07T02:09:53.360447998Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:53.365968 containerd[1463]: time="2026-03-07T02:09:53.365795783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:09:53.367380 containerd[1463]: time="2026-03-07T02:09:53.367065028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.674506927s" Mar 7 02:09:53.367380 containerd[1463]: time="2026-03-07T02:09:53.367130159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 02:09:53.377719 containerd[1463]: time="2026-03-07T02:09:53.377657441Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 02:09:53.414664 containerd[1463]: time="2026-03-07T02:09:53.413614695Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152\"" Mar 7 02:09:53.415562 containerd[1463]: time="2026-03-07T02:09:53.415450610Z" level=info msg="StartContainer for \"d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152\"" Mar 7 02:09:53.474742 systemd[1]: Started cri-containerd-d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152.scope - libcontainer container d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152. Mar 7 02:09:53.540691 containerd[1463]: time="2026-03-07T02:09:53.540441417Z" level=info msg="StartContainer for \"d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152\" returns successfully" Mar 7 02:09:53.559260 systemd[1]: cri-containerd-d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152.scope: Deactivated successfully. Mar 7 02:09:53.702291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152-rootfs.mount: Deactivated successfully. 
Mar 7 02:09:53.738682 kubelet[2513]: E0307 02:09:53.738553 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:09:53.747381 containerd[1463]: time="2026-03-07T02:09:53.747095679Z" level=info msg="shim disconnected" id=d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152 namespace=k8s.io Mar 7 02:09:53.747381 containerd[1463]: time="2026-03-07T02:09:53.747160250Z" level=warning msg="cleaning up after shim disconnected" id=d26c7e29b1865417dbfe3f2268bb7a77a2711a27fa0294993a661b9f8abd5152 namespace=k8s.io Mar 7 02:09:53.747381 containerd[1463]: time="2026-03-07T02:09:53.747173264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:09:53.839613 kubelet[2513]: E0307 02:09:53.839366 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:53.845490 containerd[1463]: time="2026-03-07T02:09:53.845428003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 02:09:54.258162 update_engine[1456]: I20260307 02:09:54.257710 1456 update_attempter.cc:509] Updating boot flags... 
Mar 7 02:09:54.311936 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3302) Mar 7 02:09:54.843277 kubelet[2513]: E0307 02:09:54.842756 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:09:55.738675 kubelet[2513]: E0307 02:09:55.738500 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:09:57.740558 kubelet[2513]: E0307 02:09:57.740442 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:09:59.740973 kubelet[2513]: E0307 02:09:59.739946 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:10:00.991377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037766944.mount: Deactivated successfully. 
Mar 7 02:10:01.046287 containerd[1463]: time="2026-03-07T02:10:01.046178950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:10:01.048911 containerd[1463]: time="2026-03-07T02:10:01.048805618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 02:10:01.063502 containerd[1463]: time="2026-03-07T02:10:01.062942999Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:10:01.075974 containerd[1463]: time="2026-03-07T02:10:01.075799432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.230313693s"
Mar 7 02:10:01.075974 containerd[1463]: time="2026-03-07T02:10:01.075919816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 02:10:01.082967 containerd[1463]: time="2026-03-07T02:10:01.082792563Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 02:10:01.147366 containerd[1463]: time="2026-03-07T02:10:01.147155123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:10:01.155669 containerd[1463]: time="2026-03-07T02:10:01.155311371Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e\""
Mar 7 02:10:01.156912 containerd[1463]: time="2026-03-07T02:10:01.156775172Z" level=info msg="StartContainer for \"dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e\""
Mar 7 02:10:01.244190 systemd[1]: Started cri-containerd-dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e.scope - libcontainer container dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e.
Mar 7 02:10:01.355085 containerd[1463]: time="2026-03-07T02:10:01.354980530Z" level=info msg="StartContainer for \"dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e\" returns successfully"
Mar 7 02:10:01.368078 systemd[1]: cri-containerd-dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e.scope: Deactivated successfully.
Mar 7 02:10:01.514191 containerd[1463]: time="2026-03-07T02:10:01.514028321Z" level=info msg="shim disconnected" id=dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e namespace=k8s.io
Mar 7 02:10:01.514689 containerd[1463]: time="2026-03-07T02:10:01.514520048Z" level=warning msg="cleaning up after shim disconnected" id=dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e namespace=k8s.io
Mar 7 02:10:01.514689 containerd[1463]: time="2026-03-07T02:10:01.514599796Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:10:01.738247 kubelet[2513]: E0307 02:10:01.738080 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575"
Mar 7 02:10:01.875604 containerd[1463]: time="2026-03-07T02:10:01.875200216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 02:10:01.990736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dec672a0a48233a27eaf06b91a08afb6aaba7a7b4342a719f353d0d27fe4069e-rootfs.mount: Deactivated successfully.
Mar 7 02:10:03.738436 kubelet[2513]: E0307 02:10:03.738355 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575"
Mar 7 02:10:04.067413 containerd[1463]: time="2026-03-07T02:10:04.067162493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:10:04.068650 containerd[1463]: time="2026-03-07T02:10:04.068518982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 02:10:04.071285 containerd[1463]: time="2026-03-07T02:10:04.071149644Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:10:04.074713 containerd[1463]: time="2026-03-07T02:10:04.074621008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 02:10:04.076161 containerd[1463]: time="2026-03-07T02:10:04.076110755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.200869192s"
Mar 7 02:10:04.076231 containerd[1463]: time="2026-03-07T02:10:04.076159837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 02:10:04.082202 containerd[1463]: time="2026-03-07T02:10:04.082164313Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 02:10:04.106123 containerd[1463]: time="2026-03-07T02:10:04.106052425Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f\""
Mar 7 02:10:04.107283 containerd[1463]: time="2026-03-07T02:10:04.107152502Z" level=info msg="StartContainer for \"177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f\""
Mar 7 02:10:04.148089 systemd[1]: run-containerd-runc-k8s.io-177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f-runc.CMfLH7.mount: Deactivated successfully.
Mar 7 02:10:04.160085 systemd[1]: Started cri-containerd-177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f.scope - libcontainer container 177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f.
Mar 7 02:10:04.220166 containerd[1463]: time="2026-03-07T02:10:04.218006650Z" level=info msg="StartContainer for \"177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f\" returns successfully"
Mar 7 02:10:04.974076 systemd[1]: cri-containerd-177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f.scope: Deactivated successfully.
Mar 7 02:10:05.014786 containerd[1463]: time="2026-03-07T02:10:05.014695335Z" level=info msg="shim disconnected" id=177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f namespace=k8s.io
Mar 7 02:10:05.014786 containerd[1463]: time="2026-03-07T02:10:05.014779642Z" level=warning msg="cleaning up after shim disconnected" id=177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f namespace=k8s.io
Mar 7 02:10:05.014786 containerd[1463]: time="2026-03-07T02:10:05.014795852Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:10:05.018551 kubelet[2513]: I0307 02:10:05.018446 2513 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 02:10:05.098172 systemd[1]: Created slice kubepods-besteffort-podc3a031a0_0380_4645_b6c5_339ff7d45b89.slice - libcontainer container kubepods-besteffort-podc3a031a0_0380_4645_b6c5_339ff7d45b89.slice.
Mar 7 02:10:05.103402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-177caec38bf29b8187fbda592f32c19635f82255e26ce850a41e940aaa9ea68f-rootfs.mount: Deactivated successfully.
Mar 7 02:10:05.116459 systemd[1]: Created slice kubepods-besteffort-podc33ead7d_4e97_44b6_9819_df4b9af4cca6.slice - libcontainer container kubepods-besteffort-podc33ead7d_4e97_44b6_9819_df4b9af4cca6.slice.
Mar 7 02:10:05.126903 systemd[1]: Created slice kubepods-besteffort-pod1cc2f24d_5c0d_45e8_8bc7_308383f07cd0.slice - libcontainer container kubepods-besteffort-pod1cc2f24d_5c0d_45e8_8bc7_308383f07cd0.slice.
Mar 7 02:10:05.134019 systemd[1]: Created slice kubepods-besteffort-pod84c2f7ea_3510_4273_9bd1_34a319c955aa.slice - libcontainer container kubepods-besteffort-pod84c2f7ea_3510_4273_9bd1_34a319c955aa.slice.
Mar 7 02:10:05.145714 systemd[1]: Created slice kubepods-burstable-pod46d14d7c_35d0_47e0_aa91_219c755e8d1d.slice - libcontainer container kubepods-burstable-pod46d14d7c_35d0_47e0_aa91_219c755e8d1d.slice.
Mar 7 02:10:05.157040 systemd[1]: Created slice kubepods-burstable-pod86a63514_f924_4601_8280_ce64e2c76969.slice - libcontainer container kubepods-burstable-pod86a63514_f924_4601_8280_ce64e2c76969.slice.
Mar 7 02:10:05.165180 systemd[1]: Created slice kubepods-besteffort-pod64ba9c77_5311_407e_91f4_43c5b17b302e.slice - libcontainer container kubepods-besteffort-pod64ba9c77_5311_407e_91f4_43c5b17b302e.slice.
Mar 7 02:10:05.168672 kubelet[2513]: I0307 02:10:05.168553 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-backend-key-pair\") pod \"whisker-75c74c5c-746nx\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " pod="calico-system/whisker-75c74c5c-746nx"
Mar 7 02:10:05.168672 kubelet[2513]: I0307 02:10:05.168651 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84c2f7ea-3510-4273-9bd1-34a319c955aa-calico-apiserver-certs\") pod \"calico-apiserver-dff7f645b-qn22h\" (UID: \"84c2f7ea-3510-4273-9bd1-34a319c955aa\") " pod="calico-system/calico-apiserver-dff7f645b-qn22h"
Mar 7 02:10:05.168878 kubelet[2513]: I0307 02:10:05.168681 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86a63514-f924-4601-8280-ce64e2c76969-config-volume\") pod \"coredns-66bc5c9577-pcjrv\" (UID: \"86a63514-f924-4601-8280-ce64e2c76969\") " pod="kube-system/coredns-66bc5c9577-pcjrv"
Mar 7 02:10:05.168878 kubelet[2513]: I0307 02:10:05.168749 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a031a0-0380-4645-b6c5-339ff7d45b89-config\") pod \"goldmane-cccfbd5cf-wcmzt\" (UID: \"c3a031a0-0380-4645-b6c5-339ff7d45b89\") " pod="calico-system/goldmane-cccfbd5cf-wcmzt"
Mar 7 02:10:05.168878 kubelet[2513]: I0307 02:10:05.168788 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c3a031a0-0380-4645-b6c5-339ff7d45b89-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-wcmzt\" (UID: \"c3a031a0-0380-4645-b6c5-339ff7d45b89\") " pod="calico-system/goldmane-cccfbd5cf-wcmzt"
Mar 7 02:10:05.168977 kubelet[2513]: I0307 02:10:05.168891 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9jqn\" (UniqueName: \"kubernetes.io/projected/64ba9c77-5311-407e-91f4-43c5b17b302e-kube-api-access-c9jqn\") pod \"calico-apiserver-dff7f645b-tvtx5\" (UID: \"64ba9c77-5311-407e-91f4-43c5b17b302e\") " pod="calico-system/calico-apiserver-dff7f645b-tvtx5"
Mar 7 02:10:05.168977 kubelet[2513]: I0307 02:10:05.168926 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfqk\" (UniqueName: \"kubernetes.io/projected/c3a031a0-0380-4645-b6c5-339ff7d45b89-kube-api-access-gzfqk\") pod \"goldmane-cccfbd5cf-wcmzt\" (UID: \"c3a031a0-0380-4645-b6c5-339ff7d45b89\") " pod="calico-system/goldmane-cccfbd5cf-wcmzt"
Mar 7 02:10:05.168977 kubelet[2513]: I0307 02:10:05.168954 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64ba9c77-5311-407e-91f4-43c5b17b302e-calico-apiserver-certs\") pod \"calico-apiserver-dff7f645b-tvtx5\" (UID: \"64ba9c77-5311-407e-91f4-43c5b17b302e\") " pod="calico-system/calico-apiserver-dff7f645b-tvtx5"
Mar 7 02:10:05.169047 kubelet[2513]: I0307 02:10:05.169001 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6b22\" (UniqueName: \"kubernetes.io/projected/1cc2f24d-5c0d-45e8-8bc7-308383f07cd0-kube-api-access-x6b22\") pod \"calico-kube-controllers-59487fcc84-nrqln\" (UID: \"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0\") " pod="calico-system/calico-kube-controllers-59487fcc84-nrqln"
Mar 7 02:10:05.169047 kubelet[2513]: I0307 02:10:05.169028 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ht87\" (UniqueName: \"kubernetes.io/projected/c33ead7d-4e97-44b6-9819-df4b9af4cca6-kube-api-access-9ht87\") pod \"whisker-75c74c5c-746nx\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " pod="calico-system/whisker-75c74c5c-746nx"
Mar 7 02:10:05.169086 kubelet[2513]: I0307 02:10:05.169056 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cc2f24d-5c0d-45e8-8bc7-308383f07cd0-tigera-ca-bundle\") pod \"calico-kube-controllers-59487fcc84-nrqln\" (UID: \"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0\") " pod="calico-system/calico-kube-controllers-59487fcc84-nrqln"
Mar 7 02:10:05.169112 kubelet[2513]: I0307 02:10:05.169084 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-ca-bundle\") pod \"whisker-75c74c5c-746nx\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " pod="calico-system/whisker-75c74c5c-746nx"
Mar 7 02:10:05.169203 kubelet[2513]: I0307 02:10:05.169146 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h94gt\" (UniqueName: \"kubernetes.io/projected/84c2f7ea-3510-4273-9bd1-34a319c955aa-kube-api-access-h94gt\") pod \"calico-apiserver-dff7f645b-qn22h\" (UID: \"84c2f7ea-3510-4273-9bd1-34a319c955aa\") " pod="calico-system/calico-apiserver-dff7f645b-qn22h"
Mar 7 02:10:05.169308 kubelet[2513]: I0307 02:10:05.169270 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwhc4\" (UniqueName: \"kubernetes.io/projected/86a63514-f924-4601-8280-ce64e2c76969-kube-api-access-zwhc4\") pod \"coredns-66bc5c9577-pcjrv\" (UID: \"86a63514-f924-4601-8280-ce64e2c76969\") " pod="kube-system/coredns-66bc5c9577-pcjrv"
Mar 7 02:10:05.169363 kubelet[2513]: I0307 02:10:05.169318 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-nginx-config\") pod \"whisker-75c74c5c-746nx\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " pod="calico-system/whisker-75c74c5c-746nx"
Mar 7 02:10:05.169363 kubelet[2513]: I0307 02:10:05.169343 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d14d7c-35d0-47e0-aa91-219c755e8d1d-config-volume\") pod \"coredns-66bc5c9577-7cwdz\" (UID: \"46d14d7c-35d0-47e0-aa91-219c755e8d1d\") " pod="kube-system/coredns-66bc5c9577-7cwdz"
Mar 7 02:10:05.169446 kubelet[2513]: I0307 02:10:05.169367 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn7c\" (UniqueName: \"kubernetes.io/projected/46d14d7c-35d0-47e0-aa91-219c755e8d1d-kube-api-access-2fn7c\") pod \"coredns-66bc5c9577-7cwdz\" (UID: \"46d14d7c-35d0-47e0-aa91-219c755e8d1d\") " pod="kube-system/coredns-66bc5c9577-7cwdz"
Mar 7 02:10:05.169446 kubelet[2513]: I0307 02:10:05.169413 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3a031a0-0380-4645-b6c5-339ff7d45b89-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-wcmzt\" (UID: \"c3a031a0-0380-4645-b6c5-339ff7d45b89\") " pod="calico-system/goldmane-cccfbd5cf-wcmzt"
Mar 7 02:10:05.415932 containerd[1463]: time="2026-03-07T02:10:05.415767353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wcmzt,Uid:c3a031a0-0380-4645-b6c5-339ff7d45b89,Namespace:calico-system,Attempt:0,}"
Mar 7 02:10:05.425098 containerd[1463]: time="2026-03-07T02:10:05.424764745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c74c5c-746nx,Uid:c33ead7d-4e97-44b6-9819-df4b9af4cca6,Namespace:calico-system,Attempt:0,}"
Mar 7 02:10:05.436301 containerd[1463]: time="2026-03-07T02:10:05.436218808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59487fcc84-nrqln,Uid:1cc2f24d-5c0d-45e8-8bc7-308383f07cd0,Namespace:calico-system,Attempt:0,}"
Mar 7 02:10:05.444345 containerd[1463]: time="2026-03-07T02:10:05.444253638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-qn22h,Uid:84c2f7ea-3510-4273-9bd1-34a319c955aa,Namespace:calico-system,Attempt:0,}"
Mar 7 02:10:05.466452 kubelet[2513]: E0307 02:10:05.466256 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:10:05.467655 containerd[1463]: time="2026-03-07T02:10:05.467172377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7cwdz,Uid:46d14d7c-35d0-47e0-aa91-219c755e8d1d,Namespace:kube-system,Attempt:0,}"
Mar 7 02:10:05.470487 kubelet[2513]: E0307 02:10:05.470377 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:10:05.471524 containerd[1463]: time="2026-03-07T02:10:05.471333936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pcjrv,Uid:86a63514-f924-4601-8280-ce64e2c76969,Namespace:kube-system,Attempt:0,}"
Mar 7 02:10:05.475521 containerd[1463]: time="2026-03-07T02:10:05.475403743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-tvtx5,Uid:64ba9c77-5311-407e-91f4-43c5b17b302e,Namespace:calico-system,Attempt:0,}"
Mar 7 02:10:05.686906 containerd[1463]: time="2026-03-07T02:10:05.686194749Z" level=error msg="Failed to destroy network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.687081 containerd[1463]: time="2026-03-07T02:10:05.686929759Z" level=error msg="encountered an error cleaning up failed sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.687081 containerd[1463]: time="2026-03-07T02:10:05.687011853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wcmzt,Uid:c3a031a0-0380-4645-b6c5-339ff7d45b89,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.703610 containerd[1463]: time="2026-03-07T02:10:05.703393573Z" level=error msg="Failed to destroy network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.704448 containerd[1463]: time="2026-03-07T02:10:05.704277651Z" level=error msg="encountered an error cleaning up failed sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.704448 containerd[1463]: time="2026-03-07T02:10:05.704413774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59487fcc84-nrqln,Uid:1cc2f24d-5c0d-45e8-8bc7-308383f07cd0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.705051 kubelet[2513]: E0307 02:10:05.704931 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.705051 kubelet[2513]: E0307 02:10:05.705034 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-wcmzt"
Mar 7 02:10:05.705329 kubelet[2513]: E0307 02:10:05.705062 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-wcmzt"
Mar 7 02:10:05.705329 kubelet[2513]: E0307 02:10:05.705121 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-wcmzt_calico-system(c3a031a0-0380-4645-b6c5-339ff7d45b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-wcmzt_calico-system(c3a031a0-0380-4645-b6c5-339ff7d45b89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-wcmzt" podUID="c3a031a0-0380-4645-b6c5-339ff7d45b89"
Mar 7 02:10:05.705932 kubelet[2513]: E0307 02:10:05.705479 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.705932 kubelet[2513]: E0307 02:10:05.705513 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59487fcc84-nrqln"
Mar 7 02:10:05.705932 kubelet[2513]: E0307 02:10:05.705533 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59487fcc84-nrqln"
Mar 7 02:10:05.706320 kubelet[2513]: E0307 02:10:05.705774 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59487fcc84-nrqln_calico-system(1cc2f24d-5c0d-45e8-8bc7-308383f07cd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59487fcc84-nrqln_calico-system(1cc2f24d-5c0d-45e8-8bc7-308383f07cd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59487fcc84-nrqln" podUID="1cc2f24d-5c0d-45e8-8bc7-308383f07cd0"
Mar 7 02:10:05.734532 containerd[1463]: time="2026-03-07T02:10:05.734315801Z" level=error msg="Failed to destroy network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.735191 containerd[1463]: time="2026-03-07T02:10:05.735155457Z" level=error msg="encountered an error cleaning up failed sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.735452 containerd[1463]: time="2026-03-07T02:10:05.735392358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c74c5c-746nx,Uid:c33ead7d-4e97-44b6-9819-df4b9af4cca6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.736173 kubelet[2513]: E0307 02:10:05.735794 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.736173 kubelet[2513]: E0307 02:10:05.735931 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c74c5c-746nx"
Mar 7 02:10:05.736173 kubelet[2513]: E0307 02:10:05.735960 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c74c5c-746nx"
Mar 7 02:10:05.736385 kubelet[2513]: E0307 02:10:05.736035 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75c74c5c-746nx_calico-system(c33ead7d-4e97-44b6-9819-df4b9af4cca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75c74c5c-746nx_calico-system(c33ead7d-4e97-44b6-9819-df4b9af4cca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75c74c5c-746nx" podUID="c33ead7d-4e97-44b6-9819-df4b9af4cca6"
Mar 7 02:10:05.751302 systemd[1]: Created slice kubepods-besteffort-pod635ed81a_2a01_425c_84a2_97d0d48a5575.slice - libcontainer container kubepods-besteffort-pod635ed81a_2a01_425c_84a2_97d0d48a5575.slice.
Mar 7 02:10:05.755148 containerd[1463]: time="2026-03-07T02:10:05.755054061Z" level=error msg="Failed to destroy network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.756111 containerd[1463]: time="2026-03-07T02:10:05.755976412Z" level=error msg="encountered an error cleaning up failed sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.756170 containerd[1463]: time="2026-03-07T02:10:05.756093381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-tvtx5,Uid:64ba9c77-5311-407e-91f4-43c5b17b302e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.756502 kubelet[2513]: E0307 02:10:05.756344 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.756696 kubelet[2513]: E0307 02:10:05.756506 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff7f645b-tvtx5"
Mar 7 02:10:05.756696 kubelet[2513]: E0307 02:10:05.756534 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff7f645b-tvtx5"
Mar 7 02:10:05.756696 kubelet[2513]: E0307 02:10:05.756645 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dff7f645b-tvtx5_calico-system(64ba9c77-5311-407e-91f4-43c5b17b302e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dff7f645b-tvtx5_calico-system(64ba9c77-5311-407e-91f4-43c5b17b302e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff7f645b-tvtx5" podUID="64ba9c77-5311-407e-91f4-43c5b17b302e"
Mar 7 02:10:05.762362 containerd[1463]: time="2026-03-07T02:10:05.762271503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8n85,Uid:635ed81a-2a01-425c-84a2-97d0d48a5575,Namespace:calico-system,Attempt:0,}"
Mar 7 02:10:05.777747 containerd[1463]: time="2026-03-07T02:10:05.777681445Z" level=error msg="Failed to destroy network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.778497 containerd[1463]: time="2026-03-07T02:10:05.778407059Z" level=error msg="encountered an error cleaning up failed sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.778497 containerd[1463]: time="2026-03-07T02:10:05.778456360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pcjrv,Uid:86a63514-f924-4601-8280-ce64e2c76969,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.778985 kubelet[2513]: E0307 02:10:05.778897 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 02:10:05.778985 kubelet[2513]: E0307 02:10:05.778968 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pcjrv" Mar 7 02:10:05.778985 kubelet[2513]: E0307 02:10:05.778989 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pcjrv" Mar 7 02:10:05.779210 kubelet[2513]: E0307 02:10:05.779038 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pcjrv_kube-system(86a63514-f924-4601-8280-ce64e2c76969)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pcjrv_kube-system(86a63514-f924-4601-8280-ce64e2c76969)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pcjrv" podUID="86a63514-f924-4601-8280-ce64e2c76969" Mar 7 02:10:05.780495 containerd[1463]: time="2026-03-07T02:10:05.780432925Z" level=error msg="Failed to destroy network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.782134 containerd[1463]: time="2026-03-07T02:10:05.782092319Z" level=error msg="encountered an 
error cleaning up failed sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.782383 containerd[1463]: time="2026-03-07T02:10:05.782311599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-qn22h,Uid:84c2f7ea-3510-4273-9bd1-34a319c955aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.783007 kubelet[2513]: E0307 02:10:05.782921 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.783074 kubelet[2513]: E0307 02:10:05.783004 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff7f645b-qn22h" Mar 7 02:10:05.783074 kubelet[2513]: E0307 02:10:05.783039 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff7f645b-qn22h" Mar 7 02:10:05.783160 kubelet[2513]: E0307 02:10:05.783111 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dff7f645b-qn22h_calico-system(84c2f7ea-3510-4273-9bd1-34a319c955aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dff7f645b-qn22h_calico-system(84c2f7ea-3510-4273-9bd1-34a319c955aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff7f645b-qn22h" podUID="84c2f7ea-3510-4273-9bd1-34a319c955aa" Mar 7 02:10:05.785397 containerd[1463]: time="2026-03-07T02:10:05.784743179Z" level=error msg="Failed to destroy network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.785477 containerd[1463]: time="2026-03-07T02:10:05.785423247Z" level=error msg="encountered an error cleaning up failed sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 
02:10:05.786623 containerd[1463]: time="2026-03-07T02:10:05.785488910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7cwdz,Uid:46d14d7c-35d0-47e0-aa91-219c755e8d1d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.786697 kubelet[2513]: E0307 02:10:05.786081 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.786697 kubelet[2513]: E0307 02:10:05.786146 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7cwdz" Mar 7 02:10:05.786697 kubelet[2513]: E0307 02:10:05.786172 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7cwdz" Mar 7 02:10:05.786797 kubelet[2513]: 
E0307 02:10:05.786247 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7cwdz_kube-system(46d14d7c-35d0-47e0-aa91-219c755e8d1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7cwdz_kube-system(46d14d7c-35d0-47e0-aa91-219c755e8d1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7cwdz" podUID="46d14d7c-35d0-47e0-aa91-219c755e8d1d" Mar 7 02:10:05.863610 containerd[1463]: time="2026-03-07T02:10:05.863474359Z" level=error msg="Failed to destroy network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.864200 containerd[1463]: time="2026-03-07T02:10:05.864116977Z" level=error msg="encountered an error cleaning up failed sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.864280 containerd[1463]: time="2026-03-07T02:10:05.864210741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8n85,Uid:635ed81a-2a01-425c-84a2-97d0d48a5575,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.864540 kubelet[2513]: E0307 02:10:05.864491 2513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:05.864640 kubelet[2513]: E0307 02:10:05.864555 2513 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8n85" Mar 7 02:10:05.864640 kubelet[2513]: E0307 02:10:05.864599 2513 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8n85" Mar 7 02:10:05.864711 kubelet[2513]: E0307 02:10:05.864685 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z8n85_calico-system(635ed81a-2a01-425c-84a2-97d0d48a5575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z8n85_calico-system(635ed81a-2a01-425c-84a2-97d0d48a5575)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:10:05.891327 kubelet[2513]: I0307 02:10:05.891268 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:05.893307 kubelet[2513]: I0307 02:10:05.893283 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Mar 7 02:10:05.895073 containerd[1463]: time="2026-03-07T02:10:05.894953129Z" level=info msg="StopPodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\"" Mar 7 02:10:05.896755 containerd[1463]: time="2026-03-07T02:10:05.896535108Z" level=info msg="StopPodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\"" Mar 7 02:10:05.902682 kubelet[2513]: I0307 02:10:05.902491 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:05.905364 containerd[1463]: time="2026-03-07T02:10:05.905270318Z" level=info msg="StopPodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\"" Mar 7 02:10:05.905976 containerd[1463]: time="2026-03-07T02:10:05.905723520Z" level=info msg="Ensure that sandbox 8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7 in task-service has been cleanup successfully" Mar 7 02:10:05.906469 containerd[1463]: time="2026-03-07T02:10:05.905725393Z" level=info msg="Ensure that sandbox 5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894 in task-service has been cleanup successfully" Mar 
7 02:10:05.906541 kubelet[2513]: I0307 02:10:05.906405 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:05.909076 containerd[1463]: time="2026-03-07T02:10:05.907201939Z" level=info msg="Ensure that sandbox 1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe in task-service has been cleanup successfully" Mar 7 02:10:05.909076 containerd[1463]: time="2026-03-07T02:10:05.908362332Z" level=info msg="StopPodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\"" Mar 7 02:10:05.909076 containerd[1463]: time="2026-03-07T02:10:05.908562917Z" level=info msg="Ensure that sandbox c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa in task-service has been cleanup successfully" Mar 7 02:10:05.911695 containerd[1463]: time="2026-03-07T02:10:05.910404460Z" level=info msg="StopPodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\"" Mar 7 02:10:05.911695 containerd[1463]: time="2026-03-07T02:10:05.910643586Z" level=info msg="Ensure that sandbox 85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377 in task-service has been cleanup successfully" Mar 7 02:10:05.911770 kubelet[2513]: I0307 02:10:05.909788 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:05.922883 kubelet[2513]: I0307 02:10:05.920927 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:05.924002 containerd[1463]: time="2026-03-07T02:10:05.923781381Z" level=info msg="StopPodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\"" Mar 7 02:10:05.927067 containerd[1463]: time="2026-03-07T02:10:05.926738634Z" level=info msg="Ensure that sandbox 
b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957 in task-service has been cleanup successfully" Mar 7 02:10:05.933784 kubelet[2513]: I0307 02:10:05.933668 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:05.936360 containerd[1463]: time="2026-03-07T02:10:05.935914929Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 02:10:05.936360 containerd[1463]: time="2026-03-07T02:10:05.936277816Z" level=info msg="StopPodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\"" Mar 7 02:10:05.936757 containerd[1463]: time="2026-03-07T02:10:05.936463011Z" level=info msg="Ensure that sandbox 90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0 in task-service has been cleanup successfully" Mar 7 02:10:05.941895 kubelet[2513]: I0307 02:10:05.941527 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Mar 7 02:10:05.945629 containerd[1463]: time="2026-03-07T02:10:05.945457422Z" level=info msg="StopPodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\"" Mar 7 02:10:05.946448 containerd[1463]: time="2026-03-07T02:10:05.946052441Z" level=info msg="Ensure that sandbox 14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b in task-service has been cleanup successfully" Mar 7 02:10:06.025786 containerd[1463]: time="2026-03-07T02:10:06.025474161Z" level=info msg="CreateContainer within sandbox \"497019f2d916496f68f58ddb02cdc5d0fa9675d4495330d7a43412b3fc9613cc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bf15aaf2d119d24b0d86fd5c1044d8fbc6fa783302c20c50aa8fc8906c89c45d\"" Mar 7 02:10:06.031154 containerd[1463]: 
time="2026-03-07T02:10:06.031108989Z" level=info msg="StartContainer for \"bf15aaf2d119d24b0d86fd5c1044d8fbc6fa783302c20c50aa8fc8906c89c45d\"" Mar 7 02:10:06.056902 containerd[1463]: time="2026-03-07T02:10:06.056549314Z" level=error msg="StopPodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" failed" error="failed to destroy network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.058324 kubelet[2513]: E0307 02:10:06.058052 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Mar 7 02:10:06.058324 kubelet[2513]: E0307 02:10:06.058128 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"} Mar 7 02:10:06.058324 kubelet[2513]: E0307 02:10:06.058195 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84c2f7ea-3510-4273-9bd1-34a319c955aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.058324 kubelet[2513]: E0307 
02:10:06.058280 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84c2f7ea-3510-4273-9bd1-34a319c955aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff7f645b-qn22h" podUID="84c2f7ea-3510-4273-9bd1-34a319c955aa" Mar 7 02:10:06.078256 containerd[1463]: time="2026-03-07T02:10:06.078157084Z" level=error msg="StopPodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" failed" error="failed to destroy network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.078550 kubelet[2513]: E0307 02:10:06.078454 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:06.078550 kubelet[2513]: E0307 02:10:06.078504 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957"} Mar 7 02:10:06.078761 kubelet[2513]: E0307 02:10:06.078548 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"c3a031a0-0380-4645-b6c5-339ff7d45b89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.078761 kubelet[2513]: E0307 02:10:06.078627 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3a031a0-0380-4645-b6c5-339ff7d45b89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-wcmzt" podUID="c3a031a0-0380-4645-b6c5-339ff7d45b89" Mar 7 02:10:06.083181 containerd[1463]: time="2026-03-07T02:10:06.083133173Z" level=error msg="StopPodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" failed" error="failed to destroy network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.083701 kubelet[2513]: E0307 02:10:06.083620 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:06.083786 kubelet[2513]: E0307 02:10:06.083700 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377"} Mar 7 02:10:06.083786 kubelet[2513]: E0307 02:10:06.083750 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.084032 kubelet[2513]: E0307 02:10:06.083782 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75c74c5c-746nx" podUID="c33ead7d-4e97-44b6-9819-df4b9af4cca6" Mar 7 02:10:06.090805 containerd[1463]: time="2026-03-07T02:10:06.090748986Z" level=error msg="StopPodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" failed" error="failed to destroy network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 
7 02:10:06.091063 kubelet[2513]: E0307 02:10:06.091031 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:06.091128 kubelet[2513]: E0307 02:10:06.091081 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe"} Mar 7 02:10:06.091128 kubelet[2513]: E0307 02:10:06.091117 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"635ed81a-2a01-425c-84a2-97d0d48a5575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.091292 kubelet[2513]: E0307 02:10:06.091151 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"635ed81a-2a01-425c-84a2-97d0d48a5575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8n85" podUID="635ed81a-2a01-425c-84a2-97d0d48a5575" Mar 7 02:10:06.091709 containerd[1463]: 
time="2026-03-07T02:10:06.091453466Z" level=error msg="StopPodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" failed" error="failed to destroy network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.091957 kubelet[2513]: E0307 02:10:06.091880 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:06.091957 kubelet[2513]: E0307 02:10:06.091943 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa"} Mar 7 02:10:06.092091 kubelet[2513]: E0307 02:10:06.091979 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64ba9c77-5311-407e-91f4-43c5b17b302e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.092091 kubelet[2513]: E0307 02:10:06.092011 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64ba9c77-5311-407e-91f4-43c5b17b302e\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff7f645b-tvtx5" podUID="64ba9c77-5311-407e-91f4-43c5b17b302e" Mar 7 02:10:06.095535 containerd[1463]: time="2026-03-07T02:10:06.095406066Z" level=error msg="StopPodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" failed" error="failed to destroy network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.095846 kubelet[2513]: E0307 02:10:06.095681 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Mar 7 02:10:06.095846 kubelet[2513]: E0307 02:10:06.095751 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"} Mar 7 02:10:06.095846 kubelet[2513]: E0307 02:10:06.095782 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.096051 kubelet[2513]: E0307 02:10:06.095909 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59487fcc84-nrqln" podUID="1cc2f24d-5c0d-45e8-8bc7-308383f07cd0" Mar 7 02:10:06.099957 containerd[1463]: time="2026-03-07T02:10:06.099484802Z" level=error msg="StopPodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" failed" error="failed to destroy network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.100049 kubelet[2513]: E0307 02:10:06.099727 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:06.100049 kubelet[2513]: E0307 02:10:06.099765 2513 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7"} Mar 7 02:10:06.100049 kubelet[2513]: E0307 02:10:06.099799 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46d14d7c-35d0-47e0-aa91-219c755e8d1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.100049 kubelet[2513]: E0307 02:10:06.099894 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46d14d7c-35d0-47e0-aa91-219c755e8d1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7cwdz" podUID="46d14d7c-35d0-47e0-aa91-219c755e8d1d" Mar 7 02:10:06.103750 containerd[1463]: time="2026-03-07T02:10:06.103684786Z" level=error msg="StopPodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" failed" error="failed to destroy network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:10:06.104103 kubelet[2513]: E0307 02:10:06.104062 2513 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:06.104173 kubelet[2513]: E0307 02:10:06.104114 2513 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0"} Mar 7 02:10:06.104173 kubelet[2513]: E0307 02:10:06.104154 2513 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86a63514-f924-4601-8280-ce64e2c76969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 02:10:06.104317 kubelet[2513]: E0307 02:10:06.104195 2513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86a63514-f924-4601-8280-ce64e2c76969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pcjrv" podUID="86a63514-f924-4601-8280-ce64e2c76969" Mar 7 02:10:06.108266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957-shm.mount: Deactivated successfully. 
Mar 7 02:10:06.136271 systemd[1]: Started cri-containerd-bf15aaf2d119d24b0d86fd5c1044d8fbc6fa783302c20c50aa8fc8906c89c45d.scope - libcontainer container bf15aaf2d119d24b0d86fd5c1044d8fbc6fa783302c20c50aa8fc8906c89c45d. Mar 7 02:10:06.197530 containerd[1463]: time="2026-03-07T02:10:06.196719375Z" level=info msg="StartContainer for \"bf15aaf2d119d24b0d86fd5c1044d8fbc6fa783302c20c50aa8fc8906c89c45d\" returns successfully" Mar 7 02:10:06.947879 containerd[1463]: time="2026-03-07T02:10:06.947457853Z" level=info msg="StopPodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\"" Mar 7 02:10:06.980981 kubelet[2513]: I0307 02:10:06.979084 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2ng8d" podStartSLOduration=3.714640882 podStartE2EDuration="17.979065246s" podCreationTimestamp="2026-03-07 02:09:49 +0000 UTC" firstStartedPulling="2026-03-07 02:09:49.812974303 +0000 UTC m=+16.188489429" lastFinishedPulling="2026-03-07 02:10:04.077398667 +0000 UTC m=+30.452913793" observedRunningTime="2026-03-07 02:10:06.966370696 +0000 UTC m=+33.341885822" watchObservedRunningTime="2026-03-07 02:10:06.979065246 +0000 UTC m=+33.354580372" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.016 [INFO][3931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.017 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" iface="eth0" netns="/var/run/netns/cni-8a7b7172-432d-5c65-4c5b-83f05515ddbe" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.017 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" iface="eth0" netns="/var/run/netns/cni-8a7b7172-432d-5c65-4c5b-83f05515ddbe" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.018 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" iface="eth0" netns="/var/run/netns/cni-8a7b7172-432d-5c65-4c5b-83f05515ddbe" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.018 [INFO][3931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.018 [INFO][3931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.052 [INFO][3939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.053 [INFO][3939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.053 [INFO][3939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.061 [WARNING][3939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.061 [INFO][3939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.063 [INFO][3939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:07.073072 containerd[1463]: 2026-03-07 02:10:07.069 [INFO][3931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:07.073778 containerd[1463]: time="2026-03-07T02:10:07.073342737Z" level=info msg="TearDown network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" successfully" Mar 7 02:10:07.073778 containerd[1463]: time="2026-03-07T02:10:07.073375979Z" level=info msg="StopPodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" returns successfully" Mar 7 02:10:07.077407 systemd[1]: run-netns-cni\x2d8a7b7172\x2d432d\x2d5c65\x2d4c5b\x2d83f05515ddbe.mount: Deactivated successfully. 
Mar 7 02:10:07.192423 kubelet[2513]: I0307 02:10:07.192337 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-nginx-config\") pod \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " Mar 7 02:10:07.192973 kubelet[2513]: I0307 02:10:07.192436 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-backend-key-pair\") pod \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " Mar 7 02:10:07.192973 kubelet[2513]: I0307 02:10:07.192467 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-ca-bundle\") pod \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " Mar 7 02:10:07.192973 kubelet[2513]: I0307 02:10:07.192506 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ht87\" (UniqueName: \"kubernetes.io/projected/c33ead7d-4e97-44b6-9819-df4b9af4cca6-kube-api-access-9ht87\") pod \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\" (UID: \"c33ead7d-4e97-44b6-9819-df4b9af4cca6\") " Mar 7 02:10:07.193273 kubelet[2513]: I0307 02:10:07.193193 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c33ead7d-4e97-44b6-9819-df4b9af4cca6" (UID: "c33ead7d-4e97-44b6-9819-df4b9af4cca6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:10:07.193742 kubelet[2513]: I0307 02:10:07.193663 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "c33ead7d-4e97-44b6-9819-df4b9af4cca6" (UID: "c33ead7d-4e97-44b6-9819-df4b9af4cca6"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:10:07.197931 kubelet[2513]: I0307 02:10:07.197777 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33ead7d-4e97-44b6-9819-df4b9af4cca6-kube-api-access-9ht87" (OuterVolumeSpecName: "kube-api-access-9ht87") pod "c33ead7d-4e97-44b6-9819-df4b9af4cca6" (UID: "c33ead7d-4e97-44b6-9819-df4b9af4cca6"). InnerVolumeSpecName "kube-api-access-9ht87". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:10:07.198703 kubelet[2513]: I0307 02:10:07.198197 2513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c33ead7d-4e97-44b6-9819-df4b9af4cca6" (UID: "c33ead7d-4e97-44b6-9819-df4b9af4cca6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 02:10:07.201624 systemd[1]: var-lib-kubelet-pods-c33ead7d\x2d4e97\x2d44b6\x2d9819\x2ddf4b9af4cca6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ht87.mount: Deactivated successfully. Mar 7 02:10:07.201800 systemd[1]: var-lib-kubelet-pods-c33ead7d\x2d4e97\x2d44b6\x2d9819\x2ddf4b9af4cca6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 02:10:07.293995 kubelet[2513]: I0307 02:10:07.293913 2513 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 7 02:10:07.293995 kubelet[2513]: I0307 02:10:07.293979 2513 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 7 02:10:07.293995 kubelet[2513]: I0307 02:10:07.293998 2513 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c33ead7d-4e97-44b6-9819-df4b9af4cca6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 7 02:10:07.294245 kubelet[2513]: I0307 02:10:07.294012 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ht87\" (UniqueName: \"kubernetes.io/projected/c33ead7d-4e97-44b6-9819-df4b9af4cca6-kube-api-access-9ht87\") on node \"localhost\" DevicePath \"\"" Mar 7 02:10:07.750372 systemd[1]: Removed slice kubepods-besteffort-podc33ead7d_4e97_44b6_9819_df4b9af4cca6.slice - libcontainer container kubepods-besteffort-podc33ead7d_4e97_44b6_9819_df4b9af4cca6.slice. Mar 7 02:10:07.954987 kubelet[2513]: I0307 02:10:07.954924 2513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:10:08.059992 systemd[1]: Created slice kubepods-besteffort-pod70aef40e_b731_4510_9001_ae12d8629f70.slice - libcontainer container kubepods-besteffort-pod70aef40e_b731_4510_9001_ae12d8629f70.slice. 
Mar 7 02:10:08.101537 kubelet[2513]: I0307 02:10:08.101453 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/70aef40e-b731-4510-9001-ae12d8629f70-nginx-config\") pod \"whisker-5b6d6f5498-fcjbq\" (UID: \"70aef40e-b731-4510-9001-ae12d8629f70\") " pod="calico-system/whisker-5b6d6f5498-fcjbq" Mar 7 02:10:08.101736 kubelet[2513]: I0307 02:10:08.101537 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscqp\" (UniqueName: \"kubernetes.io/projected/70aef40e-b731-4510-9001-ae12d8629f70-kube-api-access-gscqp\") pod \"whisker-5b6d6f5498-fcjbq\" (UID: \"70aef40e-b731-4510-9001-ae12d8629f70\") " pod="calico-system/whisker-5b6d6f5498-fcjbq" Mar 7 02:10:08.101736 kubelet[2513]: I0307 02:10:08.101624 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70aef40e-b731-4510-9001-ae12d8629f70-whisker-ca-bundle\") pod \"whisker-5b6d6f5498-fcjbq\" (UID: \"70aef40e-b731-4510-9001-ae12d8629f70\") " pod="calico-system/whisker-5b6d6f5498-fcjbq" Mar 7 02:10:08.101736 kubelet[2513]: I0307 02:10:08.101652 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70aef40e-b731-4510-9001-ae12d8629f70-whisker-backend-key-pair\") pod \"whisker-5b6d6f5498-fcjbq\" (UID: \"70aef40e-b731-4510-9001-ae12d8629f70\") " pod="calico-system/whisker-5b6d6f5498-fcjbq" Mar 7 02:10:08.256910 kernel: calico-node[3993]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 02:10:08.377674 containerd[1463]: time="2026-03-07T02:10:08.377403723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b6d6f5498-fcjbq,Uid:70aef40e-b731-4510-9001-ae12d8629f70,Namespace:calico-system,Attempt:0,}" Mar 7 
02:10:08.627930 systemd-networkd[1382]: calid87d8a16bb5: Link UP Mar 7 02:10:08.629980 systemd-networkd[1382]: calid87d8a16bb5: Gained carrier Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.453 [INFO][4091] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0 whisker-5b6d6f5498- calico-system 70aef40e-b731-4510-9001-ae12d8629f70 913 0 2026-03-07 02:10:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b6d6f5498 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b6d6f5498-fcjbq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid87d8a16bb5 [] [] }} ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.453 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.510 [INFO][4105] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" HandleID="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Workload="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.530 [INFO][4105] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" 
HandleID="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Workload="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efa70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b6d6f5498-fcjbq", "timestamp":"2026-03-07 02:10:08.510522231 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000652f20)} Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.530 [INFO][4105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.530 [INFO][4105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.530 [INFO][4105] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.542 [INFO][4105] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.562 [INFO][4105] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.570 [INFO][4105] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.574 [INFO][4105] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.578 [INFO][4105] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:08.656795 
containerd[1463]: 2026-03-07 02:10:08.578 [INFO][4105] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.582 [INFO][4105] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.590 [INFO][4105] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.598 [INFO][4105] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.598 [INFO][4105] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" host="localhost" Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.598 [INFO][4105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:10:08.656795 containerd[1463]: 2026-03-07 02:10:08.598 [INFO][4105] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" HandleID="k8s-pod-network.5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Workload="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.658109 containerd[1463]: 2026-03-07 02:10:08.606 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0", GenerateName:"whisker-5b6d6f5498-", Namespace:"calico-system", SelfLink:"", UID:"70aef40e-b731-4510-9001-ae12d8629f70", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b6d6f5498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b6d6f5498-fcjbq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid87d8a16bb5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:08.658109 containerd[1463]: 2026-03-07 02:10:08.606 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.658109 containerd[1463]: 2026-03-07 02:10:08.606 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid87d8a16bb5 ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.658109 containerd[1463]: 2026-03-07 02:10:08.631 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.658109 containerd[1463]: 2026-03-07 02:10:08.631 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0", GenerateName:"whisker-5b6d6f5498-", Namespace:"calico-system", SelfLink:"", UID:"70aef40e-b731-4510-9001-ae12d8629f70", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 10, 8, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b6d6f5498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac", Pod:"whisker-5b6d6f5498-fcjbq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid87d8a16bb5", MAC:"9a:d7:1a:69:6c:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:08.658109 containerd[1463]: 2026-03-07 02:10:08.647 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac" Namespace="calico-system" Pod="whisker-5b6d6f5498-fcjbq" WorkloadEndpoint="localhost-k8s-whisker--5b6d6f5498--fcjbq-eth0" Mar 7 02:10:08.691489 containerd[1463]: time="2026-03-07T02:10:08.690875053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:08.691489 containerd[1463]: time="2026-03-07T02:10:08.691110764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:08.691489 containerd[1463]: time="2026-03-07T02:10:08.691282283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:08.693002 containerd[1463]: time="2026-03-07T02:10:08.692612404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:08.732084 systemd[1]: Started cri-containerd-5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac.scope - libcontainer container 5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac. Mar 7 02:10:08.753103 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:08.793642 containerd[1463]: time="2026-03-07T02:10:08.793444934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b6d6f5498-fcjbq,Uid:70aef40e-b731-4510-9001-ae12d8629f70,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac\"" Mar 7 02:10:08.796459 containerd[1463]: time="2026-03-07T02:10:08.796356874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 02:10:08.940411 systemd-networkd[1382]: vxlan.calico: Link UP Mar 7 02:10:08.940425 systemd-networkd[1382]: vxlan.calico: Gained carrier Mar 7 02:10:09.441884 containerd[1463]: time="2026-03-07T02:10:09.441750874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:09.443350 containerd[1463]: time="2026-03-07T02:10:09.443189531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 02:10:09.444887 containerd[1463]: time="2026-03-07T02:10:09.444766247Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:09.448346 containerd[1463]: time="2026-03-07T02:10:09.448176085Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:09.449188 containerd[1463]: time="2026-03-07T02:10:09.449129265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 652.719382ms" Mar 7 02:10:09.449188 containerd[1463]: time="2026-03-07T02:10:09.449184228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 02:10:09.455250 containerd[1463]: time="2026-03-07T02:10:09.455180029Z" level=info msg="CreateContainer within sandbox \"5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 02:10:09.472300 containerd[1463]: time="2026-03-07T02:10:09.472187249Z" level=info msg="CreateContainer within sandbox \"5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"130ad59202585f508745b54d0f1d0b3c62e90fa4af6e8eb65837c5d43a7599ed\"" Mar 7 02:10:09.473178 containerd[1463]: time="2026-03-07T02:10:09.473050586Z" level=info msg="StartContainer for \"130ad59202585f508745b54d0f1d0b3c62e90fa4af6e8eb65837c5d43a7599ed\"" Mar 7 02:10:09.529215 systemd[1]: Started cri-containerd-130ad59202585f508745b54d0f1d0b3c62e90fa4af6e8eb65837c5d43a7599ed.scope - libcontainer container 130ad59202585f508745b54d0f1d0b3c62e90fa4af6e8eb65837c5d43a7599ed. 
Mar 7 02:10:09.592009 containerd[1463]: time="2026-03-07T02:10:09.591891870Z" level=info msg="StartContainer for \"130ad59202585f508745b54d0f1d0b3c62e90fa4af6e8eb65837c5d43a7599ed\" returns successfully" Mar 7 02:10:09.593805 containerd[1463]: time="2026-03-07T02:10:09.593675320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 02:10:09.737120 systemd-networkd[1382]: calid87d8a16bb5: Gained IPv6LL Mar 7 02:10:09.742197 kubelet[2513]: I0307 02:10:09.742142 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33ead7d-4e97-44b6-9819-df4b9af4cca6" path="/var/lib/kubelet/pods/c33ead7d-4e97-44b6-9819-df4b9af4cca6/volumes" Mar 7 02:10:10.058209 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Mar 7 02:10:10.379632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186408595.mount: Deactivated successfully. Mar 7 02:10:10.406879 containerd[1463]: time="2026-03-07T02:10:10.406709111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:10.407800 containerd[1463]: time="2026-03-07T02:10:10.407757968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 02:10:10.409068 containerd[1463]: time="2026-03-07T02:10:10.409033537Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:10.412541 containerd[1463]: time="2026-03-07T02:10:10.412496203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:10.413303 containerd[1463]: time="2026-03-07T02:10:10.413263416Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 819.462162ms" Mar 7 02:10:10.413303 containerd[1463]: time="2026-03-07T02:10:10.413301217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 02:10:10.420175 containerd[1463]: time="2026-03-07T02:10:10.420125061Z" level=info msg="CreateContainer within sandbox \"5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 02:10:10.436682 containerd[1463]: time="2026-03-07T02:10:10.436528637Z" level=info msg="CreateContainer within sandbox \"5e0f11275bab192a55340a5530a29492bd5bdbd0da45acf5c0c9ef243e0fcdac\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4f507a7a329f15ef11c4b102e2b6e9373c9a77c727780cc06704595bbb21784e\"" Mar 7 02:10:10.437479 containerd[1463]: time="2026-03-07T02:10:10.437447441Z" level=info msg="StartContainer for \"4f507a7a329f15ef11c4b102e2b6e9373c9a77c727780cc06704595bbb21784e\"" Mar 7 02:10:10.481208 systemd[1]: Started cri-containerd-4f507a7a329f15ef11c4b102e2b6e9373c9a77c727780cc06704595bbb21784e.scope - libcontainer container 4f507a7a329f15ef11c4b102e2b6e9373c9a77c727780cc06704595bbb21784e. 
Mar 7 02:10:10.524176 containerd[1463]: time="2026-03-07T02:10:10.524029442Z" level=info msg="StartContainer for \"4f507a7a329f15ef11c4b102e2b6e9373c9a77c727780cc06704595bbb21784e\" returns successfully" Mar 7 02:10:10.984914 kubelet[2513]: I0307 02:10:10.983931 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b6d6f5498-fcjbq" podStartSLOduration=1.365165443 podStartE2EDuration="2.983916649s" podCreationTimestamp="2026-03-07 02:10:08 +0000 UTC" firstStartedPulling="2026-03-07 02:10:08.795701076 +0000 UTC m=+35.171216202" lastFinishedPulling="2026-03-07 02:10:10.414452282 +0000 UTC m=+36.789967408" observedRunningTime="2026-03-07 02:10:10.983294569 +0000 UTC m=+37.358809755" watchObservedRunningTime="2026-03-07 02:10:10.983916649 +0000 UTC m=+37.359431775" Mar 7 02:10:14.821748 systemd[1]: Started sshd@7-10.0.0.4:22-10.0.0.1:40746.service - OpenSSH per-connection server daemon (10.0.0.1:40746). Mar 7 02:10:14.900027 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 40746 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:10:14.901871 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:10:14.906971 systemd-logind[1453]: New session 8 of user core. Mar 7 02:10:14.914223 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 02:10:15.062512 sshd[4368]: pam_unix(sshd:session): session closed for user core Mar 7 02:10:15.066574 systemd[1]: sshd@7-10.0.0.4:22-10.0.0.1:40746.service: Deactivated successfully. Mar 7 02:10:15.068436 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 02:10:15.069227 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Mar 7 02:10:15.070514 systemd-logind[1453]: Removed session 8. 
Mar 7 02:10:16.738239 containerd[1463]: time="2026-03-07T02:10:16.737902889Z" level=info msg="StopPodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\"" Mar 7 02:10:16.739118 containerd[1463]: time="2026-03-07T02:10:16.737902964Z" level=info msg="StopPodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\"" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.793 [INFO][4414] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.795 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" iface="eth0" netns="/var/run/netns/cni-fc8edd47-ec07-47a2-6b80-0865dbe8b4b0" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.795 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" iface="eth0" netns="/var/run/netns/cni-fc8edd47-ec07-47a2-6b80-0865dbe8b4b0" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.795 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" iface="eth0" netns="/var/run/netns/cni-fc8edd47-ec07-47a2-6b80-0865dbe8b4b0" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.795 [INFO][4414] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.796 [INFO][4414] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.817 [INFO][4433] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.817 [INFO][4433] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.817 [INFO][4433] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.828 [WARNING][4433] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.828 [INFO][4433] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.830 [INFO][4433] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:16.835238 containerd[1463]: 2026-03-07 02:10:16.832 [INFO][4414] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Mar 7 02:10:16.839033 containerd[1463]: time="2026-03-07T02:10:16.837133419Z" level=info msg="TearDown network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" successfully" Mar 7 02:10:16.839033 containerd[1463]: time="2026-03-07T02:10:16.838906658Z" level=info msg="StopPodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" returns successfully" Mar 7 02:10:16.840273 systemd[1]: run-netns-cni\x2dfc8edd47\x2dec07\x2d47a2\x2d6b80\x2d0865dbe8b4b0.mount: Deactivated successfully. 
Mar 7 02:10:16.843924 containerd[1463]: time="2026-03-07T02:10:16.843800057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-qn22h,Uid:84c2f7ea-3510-4273-9bd1-34a319c955aa,Namespace:calico-system,Attempt:1,}" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.792 [INFO][4416] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.792 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" iface="eth0" netns="/var/run/netns/cni-313d5310-ce4b-b503-198b-e8056fbe09b2" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.792 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" iface="eth0" netns="/var/run/netns/cni-313d5310-ce4b-b503-198b-e8056fbe09b2" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.792 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" iface="eth0" netns="/var/run/netns/cni-313d5310-ce4b-b503-198b-e8056fbe09b2" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.792 [INFO][4416] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.792 [INFO][4416] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.824 [INFO][4430] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.824 [INFO][4430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.830 [INFO][4430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.836 [WARNING][4430] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.836 [INFO][4430] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.838 [INFO][4430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:16.846295 containerd[1463]: 2026-03-07 02:10:16.843 [INFO][4416] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:16.846911 containerd[1463]: time="2026-03-07T02:10:16.846881704Z" level=info msg="TearDown network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" successfully" Mar 7 02:10:16.846911 containerd[1463]: time="2026-03-07T02:10:16.846905217Z" level=info msg="StopPodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" returns successfully" Mar 7 02:10:16.850224 systemd[1]: run-netns-cni\x2d313d5310\x2dce4b\x2db503\x2d198b\x2de8056fbe09b2.mount: Deactivated successfully. 
Mar 7 02:10:16.852976 containerd[1463]: time="2026-03-07T02:10:16.852892359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wcmzt,Uid:c3a031a0-0380-4645-b6c5-339ff7d45b89,Namespace:calico-system,Attempt:1,}" Mar 7 02:10:16.984661 systemd-networkd[1382]: cali161ee84be9a: Link UP Mar 7 02:10:16.984984 systemd-networkd[1382]: cali161ee84be9a: Gained carrier Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.911 [INFO][4448] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0 calico-apiserver-dff7f645b- calico-system 84c2f7ea-3510-4273-9bd1-34a319c955aa 994 0 2026-03-07 02:09:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dff7f645b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dff7f645b-qn22h eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali161ee84be9a [] [] }} ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.912 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.943 [INFO][4476] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" 
HandleID="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.950 [INFO][4476] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" HandleID="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-dff7f645b-qn22h", "timestamp":"2026-03-07 02:10:16.943722311 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001dadc0)} Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.950 [INFO][4476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.950 [INFO][4476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.950 [INFO][4476] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.953 [INFO][4476] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.957 [INFO][4476] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.962 [INFO][4476] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.964 [INFO][4476] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.966 [INFO][4476] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.966 [INFO][4476] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.968 [INFO][4476] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77 Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.973 [INFO][4476] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.978 [INFO][4476] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.978 [INFO][4476] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" host="localhost" Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.978 [INFO][4476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:17.002309 containerd[1463]: 2026-03-07 02:10:16.978 [INFO][4476] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" HandleID="k8s-pod-network.ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.002889 containerd[1463]: 2026-03-07 02:10:16.980 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"84c2f7ea-3510-4273-9bd1-34a319c955aa", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dff7f645b-qn22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali161ee84be9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:17.002889 containerd[1463]: 2026-03-07 02:10:16.981 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.002889 containerd[1463]: 2026-03-07 02:10:16.981 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali161ee84be9a ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.002889 containerd[1463]: 2026-03-07 02:10:16.985 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.002889 containerd[1463]: 2026-03-07 02:10:16.985 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"84c2f7ea-3510-4273-9bd1-34a319c955aa", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77", Pod:"calico-apiserver-dff7f645b-qn22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali161ee84be9a", MAC:"4a:e8:25:f6:27:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:17.002889 containerd[1463]: 2026-03-07 02:10:17.000 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77" 
Namespace="calico-system" Pod="calico-apiserver-dff7f645b-qn22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0" Mar 7 02:10:17.026327 containerd[1463]: time="2026-03-07T02:10:17.026124814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:17.026327 containerd[1463]: time="2026-03-07T02:10:17.026272169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:17.026449 containerd[1463]: time="2026-03-07T02:10:17.026320960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:17.026715 containerd[1463]: time="2026-03-07T02:10:17.026468665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:17.054219 systemd[1]: Started cri-containerd-ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77.scope - libcontainer container ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77. 
Mar 7 02:10:17.067335 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:17.091384 systemd-networkd[1382]: cali18eed88e107: Link UP Mar 7 02:10:17.092250 systemd-networkd[1382]: cali18eed88e107: Gained carrier Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.915 [INFO][4457] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0 goldmane-cccfbd5cf- calico-system c3a031a0-0380-4645-b6c5-339ff7d45b89 993 0 2026-03-07 02:09:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-wcmzt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali18eed88e107 [] [] }} ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.915 [INFO][4457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.942 [INFO][4479] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" HandleID="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.950 [INFO][4479] ipam/ipam_plugin.go 301: Auto assigning 
IP ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" HandleID="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003697d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-wcmzt", "timestamp":"2026-03-07 02:10:16.942280767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004bb1e0)} Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.950 [INFO][4479] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.978 [INFO][4479] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:16.978 [INFO][4479] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.053 [INFO][4479] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.059 [INFO][4479] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.065 [INFO][4479] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.066 [INFO][4479] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.069 [INFO][4479] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.069 [INFO][4479] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.071 [INFO][4479] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835 Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.075 [INFO][4479] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.081 [INFO][4479] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.081 [INFO][4479] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" host="localhost" Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.081 [INFO][4479] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:17.104461 containerd[1463]: 2026-03-07 02:10:17.081 [INFO][4479] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" HandleID="k8s-pod-network.722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.105050 containerd[1463]: 2026-03-07 02:10:17.085 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c3a031a0-0380-4645-b6c5-339ff7d45b89", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-wcmzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18eed88e107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:17.105050 containerd[1463]: 2026-03-07 02:10:17.085 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.105050 containerd[1463]: 2026-03-07 02:10:17.085 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18eed88e107 ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.105050 containerd[1463]: 2026-03-07 02:10:17.089 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.105050 containerd[1463]: 2026-03-07 02:10:17.090 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c3a031a0-0380-4645-b6c5-339ff7d45b89", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835", Pod:"goldmane-cccfbd5cf-wcmzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18eed88e107", MAC:"e2:ca:5e:bb:9e:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:17.105050 containerd[1463]: 2026-03-07 02:10:17.101 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wcmzt" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:17.109691 containerd[1463]: time="2026-03-07T02:10:17.109658686Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-dff7f645b-qn22h,Uid:84c2f7ea-3510-4273-9bd1-34a319c955aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77\"" Mar 7 02:10:17.114327 containerd[1463]: time="2026-03-07T02:10:17.113159838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 02:10:17.126531 containerd[1463]: time="2026-03-07T02:10:17.126405096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:17.126531 containerd[1463]: time="2026-03-07T02:10:17.126449149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:17.128982 containerd[1463]: time="2026-03-07T02:10:17.127140217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:17.128982 containerd[1463]: time="2026-03-07T02:10:17.127221780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:17.154052 systemd[1]: Started cri-containerd-722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835.scope - libcontainer container 722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835. 
Mar 7 02:10:17.167994 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:17.192419 containerd[1463]: time="2026-03-07T02:10:17.192354772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wcmzt,Uid:c3a031a0-0380-4645-b6c5-339ff7d45b89,Namespace:calico-system,Attempt:1,} returns sandbox id \"722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835\"" Mar 7 02:10:17.738912 containerd[1463]: time="2026-03-07T02:10:17.738714586Z" level=info msg="StopPodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\"" Mar 7 02:10:17.739494 containerd[1463]: time="2026-03-07T02:10:17.738900113Z" level=info msg="StopPodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\"" Mar 7 02:10:17.740227 containerd[1463]: time="2026-03-07T02:10:17.739720454Z" level=info msg="StopPodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\"" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.832 [INFO][4647] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.832 [INFO][4647] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" iface="eth0" netns="/var/run/netns/cni-19e13fec-ae52-e0cf-4450-117040bc47b8" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.832 [INFO][4647] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" iface="eth0" netns="/var/run/netns/cni-19e13fec-ae52-e0cf-4450-117040bc47b8" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.833 [INFO][4647] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" iface="eth0" netns="/var/run/netns/cni-19e13fec-ae52-e0cf-4450-117040bc47b8" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.833 [INFO][4647] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.833 [INFO][4647] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.868 [INFO][4678] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.869 [INFO][4678] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.869 [INFO][4678] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.876 [WARNING][4678] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.876 [INFO][4678] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.878 [INFO][4678] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:17.884132 containerd[1463]: 2026-03-07 02:10:17.881 [INFO][4647] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:17.884432 containerd[1463]: time="2026-03-07T02:10:17.884237441Z" level=info msg="TearDown network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" successfully" Mar 7 02:10:17.884432 containerd[1463]: time="2026-03-07T02:10:17.884258229Z" level=info msg="StopPodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" returns successfully" Mar 7 02:10:17.887283 kubelet[2513]: E0307 02:10:17.887234 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:17.889989 systemd[1]: run-netns-cni\x2d19e13fec\x2dae52\x2de0cf\x2d4450\x2d117040bc47b8.mount: Deactivated successfully. 
Mar 7 02:10:17.890515 containerd[1463]: time="2026-03-07T02:10:17.890419074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pcjrv,Uid:86a63514-f924-4601-8280-ce64e2c76969,Namespace:kube-system,Attempt:1,}" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.817 [INFO][4648] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.817 [INFO][4648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" iface="eth0" netns="/var/run/netns/cni-7963bd04-aa70-f41c-6f9d-11f4a2139efe" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.818 [INFO][4648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" iface="eth0" netns="/var/run/netns/cni-7963bd04-aa70-f41c-6f9d-11f4a2139efe" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.818 [INFO][4648] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" iface="eth0" netns="/var/run/netns/cni-7963bd04-aa70-f41c-6f9d-11f4a2139efe" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.819 [INFO][4648] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.819 [INFO][4648] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.883 [INFO][4672] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.884 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.884 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.895 [WARNING][4672] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.895 [INFO][4672] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.901 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:17.920293 containerd[1463]: 2026-03-07 02:10:17.917 [INFO][4648] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:17.921254 containerd[1463]: time="2026-03-07T02:10:17.921055776Z" level=info msg="TearDown network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" successfully" Mar 7 02:10:17.921254 containerd[1463]: time="2026-03-07T02:10:17.921091542Z" level=info msg="StopPodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" returns successfully" Mar 7 02:10:17.924495 systemd[1]: run-netns-cni\x2d7963bd04\x2daa70\x2df41c\x2d6f9d\x2d11f4a2139efe.mount: Deactivated successfully. Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.807 [INFO][4640] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.810 [INFO][4640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" iface="eth0" netns="/var/run/netns/cni-f6c84a80-eaf8-2c42-1597-54bf4a1e4fad" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.811 [INFO][4640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" iface="eth0" netns="/var/run/netns/cni-f6c84a80-eaf8-2c42-1597-54bf4a1e4fad" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.811 [INFO][4640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" iface="eth0" netns="/var/run/netns/cni-f6c84a80-eaf8-2c42-1597-54bf4a1e4fad" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.811 [INFO][4640] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.811 [INFO][4640] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.891 [INFO][4666] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.892 [INFO][4666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.901 [INFO][4666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.907 [WARNING][4666] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.907 [INFO][4666] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.909 [INFO][4666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:17.926463 containerd[1463]: 2026-03-07 02:10:17.919 [INFO][4640] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Mar 7 02:10:17.927325 containerd[1463]: time="2026-03-07T02:10:17.927298755Z" level=info msg="TearDown network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" successfully" Mar 7 02:10:17.927414 containerd[1463]: time="2026-03-07T02:10:17.927387492Z" level=info msg="StopPodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" returns successfully" Mar 7 02:10:17.927724 containerd[1463]: time="2026-03-07T02:10:17.927651191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-tvtx5,Uid:64ba9c77-5311-407e-91f4-43c5b17b302e,Namespace:calico-system,Attempt:1,}" Mar 7 02:10:17.931404 containerd[1463]: time="2026-03-07T02:10:17.931381190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59487fcc84-nrqln,Uid:1cc2f24d-5c0d-45e8-8bc7-308383f07cd0,Namespace:calico-system,Attempt:1,}" Mar 7 02:10:18.058755 systemd-networkd[1382]: cali161ee84be9a: Gained IPv6LL Mar 7 02:10:18.067495 
systemd-networkd[1382]: cali2b72397878f: Link UP Mar 7 02:10:18.072638 systemd-networkd[1382]: cali2b72397878f: Gained carrier Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:17.949 [INFO][4695] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--pcjrv-eth0 coredns-66bc5c9577- kube-system 86a63514-f924-4601-8280-ce64e2c76969 1011 0 2026-03-07 02:09:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-pcjrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b72397878f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:17.949 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:17.991 [INFO][4710] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" HandleID="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.006 [INFO][4710] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" 
HandleID="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-pcjrv", "timestamp":"2026-03-07 02:10:17.991051269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00072e000)} Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.006 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.006 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.006 [INFO][4710] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.009 [INFO][4710] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.015 [INFO][4710] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.027 [INFO][4710] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.029 [INFO][4710] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.033 [INFO][4710] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:18.103357 
containerd[1463]: 2026-03-07 02:10:18.033 [INFO][4710] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.036 [INFO][4710] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5 Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.041 [INFO][4710] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.050 [INFO][4710] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.050 [INFO][4710] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" host="localhost" Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.050 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:10:18.103357 containerd[1463]: 2026-03-07 02:10:18.050 [INFO][4710] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" HandleID="k8s-pod-network.37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.104438 containerd[1463]: 2026-03-07 02:10:18.056 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pcjrv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a63514-f924-4601-8280-ce64e2c76969", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-pcjrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b72397878f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:18.104438 containerd[1463]: 2026-03-07 02:10:18.056 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.104438 containerd[1463]: 2026-03-07 02:10:18.056 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b72397878f ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.104438 containerd[1463]: 2026-03-07 02:10:18.077 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.104438 containerd[1463]: 2026-03-07 02:10:18.079 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pcjrv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a63514-f924-4601-8280-ce64e2c76969", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5", Pod:"coredns-66bc5c9577-pcjrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b72397878f", MAC:"2a:2b:4e:94:62:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:18.104438 containerd[1463]: 2026-03-07 02:10:18.096 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5" Namespace="kube-system" Pod="coredns-66bc5c9577-pcjrv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:18.166036 systemd-networkd[1382]: caliddee4edc036: Link UP Mar 7 02:10:18.167184 systemd-networkd[1382]: caliddee4edc036: Gained carrier Mar 7 02:10:18.184682 containerd[1463]: time="2026-03-07T02:10:18.183718084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:18.184682 containerd[1463]: time="2026-03-07T02:10:18.183926552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:18.184682 containerd[1463]: time="2026-03-07T02:10:18.183944605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:18.184682 containerd[1463]: time="2026-03-07T02:10:18.184026478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.025 [INFO][4709] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0 calico-apiserver-dff7f645b- calico-system 64ba9c77-5311-407e-91f4-43c5b17b302e 1009 0 2026-03-07 02:09:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dff7f645b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dff7f645b-tvtx5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliddee4edc036 [] [] }} ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.025 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.085 [INFO][4748] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" HandleID="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.097 [INFO][4748] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" 
HandleID="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006c3ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-dff7f645b-tvtx5", "timestamp":"2026-03-07 02:10:18.085560507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000198580)} Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.098 [INFO][4748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.098 [INFO][4748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.098 [INFO][4748] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.112 [INFO][4748] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.119 [INFO][4748] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.125 [INFO][4748] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.131 [INFO][4748] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.136 [INFO][4748] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 
02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.136 [INFO][4748] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.139 [INFO][4748] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.148 [INFO][4748] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.155 [INFO][4748] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.155 [INFO][4748] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" host="localhost" Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.155 [INFO][4748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:10:18.189810 containerd[1463]: 2026-03-07 02:10:18.156 [INFO][4748] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" HandleID="k8s-pod-network.d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.190339 containerd[1463]: 2026-03-07 02:10:18.161 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"64ba9c77-5311-407e-91f4-43c5b17b302e", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dff7f645b-tvtx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddee4edc036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:18.190339 containerd[1463]: 2026-03-07 02:10:18.161 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.190339 containerd[1463]: 2026-03-07 02:10:18.161 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddee4edc036 ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.190339 containerd[1463]: 2026-03-07 02:10:18.169 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.190339 containerd[1463]: 2026-03-07 02:10:18.170 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", 
UID:"64ba9c77-5311-407e-91f4-43c5b17b302e", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f", Pod:"calico-apiserver-dff7f645b-tvtx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddee4edc036", MAC:"16:fd:cf:5a:cd:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:18.190339 containerd[1463]: 2026-03-07 02:10:18.182 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f" Namespace="calico-system" Pod="calico-apiserver-dff7f645b-tvtx5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:18.206091 systemd[1]: Started cri-containerd-37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5.scope - libcontainer container 37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5. 
Mar 7 02:10:18.227308 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:18.231710 containerd[1463]: time="2026-03-07T02:10:18.231525383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:18.231710 containerd[1463]: time="2026-03-07T02:10:18.231577851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:18.231710 containerd[1463]: time="2026-03-07T02:10:18.231629627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:18.239491 containerd[1463]: time="2026-03-07T02:10:18.234116367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:18.264196 containerd[1463]: time="2026-03-07T02:10:18.264102372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pcjrv,Uid:86a63514-f924-4601-8280-ce64e2c76969,Namespace:kube-system,Attempt:1,} returns sandbox id \"37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5\"" Mar 7 02:10:18.265537 kubelet[2513]: E0307 02:10:18.265509 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:18.276571 containerd[1463]: time="2026-03-07T02:10:18.276344976Z" level=info msg="CreateContainer within sandbox \"37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:10:18.279904 systemd-networkd[1382]: cali2746861d2d8: Link UP Mar 7 02:10:18.282506 systemd-networkd[1382]: cali2746861d2d8: Gained carrier Mar 7 02:10:18.283036 systemd[1]: Started 
cri-containerd-d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f.scope - libcontainer container d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f. Mar 7 02:10:18.305011 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.027 [INFO][4713] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0 calico-kube-controllers-59487fcc84- calico-system 1cc2f24d-5c0d-45e8-8bc7-308383f07cd0 1008 0 2026-03-07 02:09:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59487fcc84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59487fcc84-nrqln eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2746861d2d8 [] [] }} ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.028 [INFO][4713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.102 [INFO][4749] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" 
HandleID="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.113 [INFO][4749] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" HandleID="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003422d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59487fcc84-nrqln", "timestamp":"2026-03-07 02:10:18.102743947 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f82c0)} Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.113 [INFO][4749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.156 [INFO][4749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.157 [INFO][4749] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.212 [INFO][4749] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.224 [INFO][4749] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.233 [INFO][4749] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.236 [INFO][4749] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.240 [INFO][4749] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.240 [INFO][4749] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.242 [INFO][4749] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045 Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.248 [INFO][4749] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.259 [INFO][4749] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.259 [INFO][4749] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" host="localhost" Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.259 [INFO][4749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:18.305687 containerd[1463]: 2026-03-07 02:10:18.260 [INFO][4749] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" HandleID="k8s-pod-network.6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.306299 containerd[1463]: 2026-03-07 02:10:18.270 [INFO][4713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0", GenerateName:"calico-kube-controllers-59487fcc84-", Namespace:"calico-system", SelfLink:"", UID:"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59487fcc84", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59487fcc84-nrqln", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2746861d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:18.306299 containerd[1463]: 2026-03-07 02:10:18.270 [INFO][4713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.306299 containerd[1463]: 2026-03-07 02:10:18.270 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2746861d2d8 ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.306299 containerd[1463]: 2026-03-07 02:10:18.284 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.306299 containerd[1463]: 2026-03-07 
02:10:18.287 [INFO][4713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0", GenerateName:"calico-kube-controllers-59487fcc84-", Namespace:"calico-system", SelfLink:"", UID:"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59487fcc84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045", Pod:"calico-kube-controllers-59487fcc84-nrqln", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2746861d2d8", MAC:"e2:3a:66:c6:01:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:18.306299 containerd[1463]: 2026-03-07 
02:10:18.299 [INFO][4713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045" Namespace="calico-system" Pod="calico-kube-controllers-59487fcc84-nrqln" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0" Mar 7 02:10:18.319416 containerd[1463]: time="2026-03-07T02:10:18.319273518Z" level=info msg="CreateContainer within sandbox \"37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5056cd03f6bb47ec38e70a9c99aabe2e31be42b35966f17f99b3a608f792505\"" Mar 7 02:10:18.321459 containerd[1463]: time="2026-03-07T02:10:18.321412815Z" level=info msg="StartContainer for \"a5056cd03f6bb47ec38e70a9c99aabe2e31be42b35966f17f99b3a608f792505\"" Mar 7 02:10:18.341150 containerd[1463]: time="2026-03-07T02:10:18.341023165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:18.342507 containerd[1463]: time="2026-03-07T02:10:18.342418388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:18.342507 containerd[1463]: time="2026-03-07T02:10:18.342471598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:18.342714 containerd[1463]: time="2026-03-07T02:10:18.342565312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:18.363455 systemd[1]: Started cri-containerd-a5056cd03f6bb47ec38e70a9c99aabe2e31be42b35966f17f99b3a608f792505.scope - libcontainer container a5056cd03f6bb47ec38e70a9c99aabe2e31be42b35966f17f99b3a608f792505. 
Mar 7 02:10:18.376497 systemd[1]: Started cri-containerd-6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045.scope - libcontainer container 6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045. Mar 7 02:10:18.384027 containerd[1463]: time="2026-03-07T02:10:18.383918286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff7f645b-tvtx5,Uid:64ba9c77-5311-407e-91f4-43c5b17b302e,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f\"" Mar 7 02:10:18.414102 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:18.431165 containerd[1463]: time="2026-03-07T02:10:18.431102270Z" level=info msg="StartContainer for \"a5056cd03f6bb47ec38e70a9c99aabe2e31be42b35966f17f99b3a608f792505\" returns successfully" Mar 7 02:10:18.508035 containerd[1463]: time="2026-03-07T02:10:18.507945023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59487fcc84-nrqln,Uid:1cc2f24d-5c0d-45e8-8bc7-308383f07cd0,Namespace:calico-system,Attempt:1,} returns sandbox id \"6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045\"" Mar 7 02:10:18.553142 kubelet[2513]: I0307 02:10:18.553024 2513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:10:18.762068 systemd-networkd[1382]: cali18eed88e107: Gained IPv6LL Mar 7 02:10:18.850568 systemd[1]: run-netns-cni\x2df6c84a80\x2deaf8\x2d2c42\x2d1597\x2d54bf4a1e4fad.mount: Deactivated successfully. 
Mar 7 02:10:19.012524 kubelet[2513]: E0307 02:10:19.012418 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:19.045172 kubelet[2513]: I0307 02:10:19.044911 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pcjrv" podStartSLOduration=40.044894694 podStartE2EDuration="40.044894694s" podCreationTimestamp="2026-03-07 02:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:10:19.028269337 +0000 UTC m=+45.403784494" watchObservedRunningTime="2026-03-07 02:10:19.044894694 +0000 UTC m=+45.420409820" Mar 7 02:10:19.127672 containerd[1463]: time="2026-03-07T02:10:19.127564494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:19.128747 containerd[1463]: time="2026-03-07T02:10:19.128704447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 02:10:19.129882 containerd[1463]: time="2026-03-07T02:10:19.129774484Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:19.132404 containerd[1463]: time="2026-03-07T02:10:19.132329736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:19.133326 containerd[1463]: time="2026-03-07T02:10:19.133253830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", 
repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.020067643s" Mar 7 02:10:19.133326 containerd[1463]: time="2026-03-07T02:10:19.133293754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 02:10:19.134498 containerd[1463]: time="2026-03-07T02:10:19.134307301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 02:10:19.137577 containerd[1463]: time="2026-03-07T02:10:19.137479756Z" level=info msg="CreateContainer within sandbox \"ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 02:10:19.153446 containerd[1463]: time="2026-03-07T02:10:19.153291206Z" level=info msg="CreateContainer within sandbox \"ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5322e403029abb3eec6e35cac541ce31f5959df0020fbcb19f2c142a0d6adfa7\"" Mar 7 02:10:19.155551 containerd[1463]: time="2026-03-07T02:10:19.155515697Z" level=info msg="StartContainer for \"5322e403029abb3eec6e35cac541ce31f5959df0020fbcb19f2c142a0d6adfa7\"" Mar 7 02:10:19.199050 systemd[1]: Started cri-containerd-5322e403029abb3eec6e35cac541ce31f5959df0020fbcb19f2c142a0d6adfa7.scope - libcontainer container 5322e403029abb3eec6e35cac541ce31f5959df0020fbcb19f2c142a0d6adfa7. 
Mar 7 02:10:19.310546 containerd[1463]: time="2026-03-07T02:10:19.310132673Z" level=info msg="StartContainer for \"5322e403029abb3eec6e35cac541ce31f5959df0020fbcb19f2c142a0d6adfa7\" returns successfully" Mar 7 02:10:19.337128 systemd-networkd[1382]: cali2b72397878f: Gained IPv6LL Mar 7 02:10:19.740216 containerd[1463]: time="2026-03-07T02:10:19.740061181Z" level=info msg="StopPodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\"" Mar 7 02:10:19.842461 systemd[1]: run-containerd-runc-k8s.io-5322e403029abb3eec6e35cac541ce31f5959df0020fbcb19f2c142a0d6adfa7-runc.rNQeJY.mount: Deactivated successfully. Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.839 [INFO][5095] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.839 [INFO][5095] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" iface="eth0" netns="/var/run/netns/cni-1cffc088-a6f6-514a-1e5d-014337137a38" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.840 [INFO][5095] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" iface="eth0" netns="/var/run/netns/cni-1cffc088-a6f6-514a-1e5d-014337137a38" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.840 [INFO][5095] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" iface="eth0" netns="/var/run/netns/cni-1cffc088-a6f6-514a-1e5d-014337137a38" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.841 [INFO][5095] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.841 [INFO][5095] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.885 [INFO][5104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.886 [INFO][5104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.886 [INFO][5104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.893 [WARNING][5104] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.894 [INFO][5104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.896 [INFO][5104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:19.904996 containerd[1463]: 2026-03-07 02:10:19.901 [INFO][5095] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:19.905727 containerd[1463]: time="2026-03-07T02:10:19.905357620Z" level=info msg="TearDown network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" successfully" Mar 7 02:10:19.905727 containerd[1463]: time="2026-03-07T02:10:19.905399137Z" level=info msg="StopPodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" returns successfully" Mar 7 02:10:19.908169 systemd[1]: run-netns-cni\x2d1cffc088\x2da6f6\x2d514a\x2d1e5d\x2d014337137a38.mount: Deactivated successfully. 
Mar 7 02:10:19.913724 systemd-networkd[1382]: caliddee4edc036: Gained IPv6LL Mar 7 02:10:19.949871 kubelet[2513]: E0307 02:10:19.949379 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:19.950302 containerd[1463]: time="2026-03-07T02:10:19.950254930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7cwdz,Uid:46d14d7c-35d0-47e0-aa91-219c755e8d1d,Namespace:kube-system,Attempt:1,}" Mar 7 02:10:20.054784 kubelet[2513]: E0307 02:10:20.054670 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:20.071175 kubelet[2513]: I0307 02:10:20.069729 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-dff7f645b-qn22h" podStartSLOduration=30.04696494 podStartE2EDuration="32.069713591s" podCreationTimestamp="2026-03-07 02:09:48 +0000 UTC" firstStartedPulling="2026-03-07 02:10:17.111418028 +0000 UTC m=+43.486933154" lastFinishedPulling="2026-03-07 02:10:19.134166679 +0000 UTC m=+45.509681805" observedRunningTime="2026-03-07 02:10:20.069178582 +0000 UTC m=+46.444693708" watchObservedRunningTime="2026-03-07 02:10:20.069713591 +0000 UTC m=+46.445228716" Mar 7 02:10:20.099021 systemd[1]: Started sshd@8-10.0.0.4:22-10.0.0.1:44900.service - OpenSSH per-connection server daemon (10.0.0.1:44900). Mar 7 02:10:20.106025 systemd-networkd[1382]: cali2746861d2d8: Gained IPv6LL Mar 7 02:10:20.192194 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 44900 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:10:20.195419 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:10:20.201801 systemd-logind[1453]: New session 9 of user core. 
Mar 7 02:10:20.211014 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 02:10:20.341946 systemd-networkd[1382]: calia24815c3ef0: Link UP Mar 7 02:10:20.345503 systemd-networkd[1382]: calia24815c3ef0: Gained carrier Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.139 [INFO][5121] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--7cwdz-eth0 coredns-66bc5c9577- kube-system 46d14d7c-35d0-47e0-aa91-219c755e8d1d 1060 0 2026-03-07 02:09:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-7cwdz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia24815c3ef0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.139 [INFO][5121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.256 [INFO][5145] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" HandleID="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.267 [INFO][5145] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" HandleID="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138c20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-7cwdz", "timestamp":"2026-03-07 02:10:20.256433729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000492840)} Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.268 [INFO][5145] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.268 [INFO][5145] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.268 [INFO][5145] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.276 [INFO][5145] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.284 [INFO][5145] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.294 [INFO][5145] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.297 [INFO][5145] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.302 [INFO][5145] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.302 [INFO][5145] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.306 [INFO][5145] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.313 [INFO][5145] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.322 [INFO][5145] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.322 [INFO][5145] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" host="localhost" Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.322 [INFO][5145] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:20.371406 containerd[1463]: 2026-03-07 02:10:20.322 [INFO][5145] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" HandleID="k8s-pod-network.4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:20.372747 containerd[1463]: 2026-03-07 02:10:20.335 [INFO][5121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7cwdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"46d14d7c-35d0-47e0-aa91-219c755e8d1d", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-7cwdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24815c3ef0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:20.372747 containerd[1463]: 2026-03-07 02:10:20.335 [INFO][5121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:20.372747 containerd[1463]: 2026-03-07 02:10:20.335 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia24815c3ef0 ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 
02:10:20.372747 containerd[1463]: 2026-03-07 02:10:20.346 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:20.372747 containerd[1463]: 2026-03-07 02:10:20.348 [INFO][5121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7cwdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"46d14d7c-35d0-47e0-aa91-219c755e8d1d", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a", Pod:"coredns-66bc5c9577-7cwdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24815c3ef0", MAC:"82:c3:0b:8a:0a:77", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:20.372747 containerd[1463]: 2026-03-07 02:10:20.366 [INFO][5121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a" Namespace="kube-system" Pod="coredns-66bc5c9577-7cwdz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:20.406917 sshd[5134]: pam_unix(sshd:session): session closed for user core Mar 7 02:10:20.410769 systemd[1]: sshd@8-10.0.0.4:22-10.0.0.1:44900.service: Deactivated successfully. Mar 7 02:10:20.412514 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 02:10:20.417902 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Mar 7 02:10:20.419889 systemd-logind[1453]: Removed session 9. Mar 7 02:10:20.442461 containerd[1463]: time="2026-03-07T02:10:20.442164978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:20.444165 containerd[1463]: time="2026-03-07T02:10:20.444031961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:20.444344 containerd[1463]: time="2026-03-07T02:10:20.444243907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:20.445143 containerd[1463]: time="2026-03-07T02:10:20.444982646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:20.479019 systemd[1]: Started cri-containerd-4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a.scope - libcontainer container 4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a. Mar 7 02:10:20.500528 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:20.529875 containerd[1463]: time="2026-03-07T02:10:20.529220032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7cwdz,Uid:46d14d7c-35d0-47e0-aa91-219c755e8d1d,Namespace:kube-system,Attempt:1,} returns sandbox id \"4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a\"" Mar 7 02:10:20.531160 kubelet[2513]: E0307 02:10:20.531106 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:20.537467 containerd[1463]: time="2026-03-07T02:10:20.537399454Z" level=info msg="CreateContainer within sandbox \"4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:10:20.559830 containerd[1463]: time="2026-03-07T02:10:20.559709265Z" level=info msg="CreateContainer within sandbox \"4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad91bab4425ab28ae6f76a0f77aaa4ebac8957b3a961ae6fd0f84d12fac0ff09\"" Mar 7 02:10:20.561877 
containerd[1463]: time="2026-03-07T02:10:20.560769285Z" level=info msg="StartContainer for \"ad91bab4425ab28ae6f76a0f77aaa4ebac8957b3a961ae6fd0f84d12fac0ff09\"" Mar 7 02:10:20.607106 systemd[1]: Started cri-containerd-ad91bab4425ab28ae6f76a0f77aaa4ebac8957b3a961ae6fd0f84d12fac0ff09.scope - libcontainer container ad91bab4425ab28ae6f76a0f77aaa4ebac8957b3a961ae6fd0f84d12fac0ff09. Mar 7 02:10:20.680915 containerd[1463]: time="2026-03-07T02:10:20.680490104Z" level=info msg="StartContainer for \"ad91bab4425ab28ae6f76a0f77aaa4ebac8957b3a961ae6fd0f84d12fac0ff09\" returns successfully" Mar 7 02:10:20.845184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042165421.mount: Deactivated successfully. Mar 7 02:10:21.060188 kubelet[2513]: I0307 02:10:21.060145 2513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:10:21.061024 kubelet[2513]: E0307 02:10:21.060350 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:21.061024 kubelet[2513]: E0307 02:10:21.060719 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:21.078772 kubelet[2513]: I0307 02:10:21.078689 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7cwdz" podStartSLOduration=42.078668633 podStartE2EDuration="42.078668633s" podCreationTimestamp="2026-03-07 02:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:10:21.077310167 +0000 UTC m=+47.452825334" watchObservedRunningTime="2026-03-07 02:10:21.078668633 +0000 UTC m=+47.454183759" Mar 7 02:10:21.261172 containerd[1463]: time="2026-03-07T02:10:21.259241203Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:21.261172 containerd[1463]: time="2026-03-07T02:10:21.260973296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 02:10:21.271322 containerd[1463]: time="2026-03-07T02:10:21.271010541Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:21.284942 containerd[1463]: time="2026-03-07T02:10:21.284782963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:21.285460 containerd[1463]: time="2026-03-07T02:10:21.285401306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.151069149s" Mar 7 02:10:21.285460 containerd[1463]: time="2026-03-07T02:10:21.285427605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 02:10:21.289158 containerd[1463]: time="2026-03-07T02:10:21.288947877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 02:10:21.294255 containerd[1463]: time="2026-03-07T02:10:21.294000636Z" level=info msg="CreateContainer within sandbox \"722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 02:10:21.333697 containerd[1463]: 
time="2026-03-07T02:10:21.333529762Z" level=info msg="CreateContainer within sandbox \"722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a225f96a477943655f03020c2e45df08ee2bf63e9116e82475b51321321c6f35\"" Mar 7 02:10:21.334336 containerd[1463]: time="2026-03-07T02:10:21.334234466Z" level=info msg="StartContainer for \"a225f96a477943655f03020c2e45df08ee2bf63e9116e82475b51321321c6f35\"" Mar 7 02:10:21.386075 systemd[1]: Started cri-containerd-a225f96a477943655f03020c2e45df08ee2bf63e9116e82475b51321321c6f35.scope - libcontainer container a225f96a477943655f03020c2e45df08ee2bf63e9116e82475b51321321c6f35. Mar 7 02:10:21.433057 containerd[1463]: time="2026-03-07T02:10:21.432964903Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:21.434149 containerd[1463]: time="2026-03-07T02:10:21.434073501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 02:10:21.437236 containerd[1463]: time="2026-03-07T02:10:21.437128161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 148.067964ms" Mar 7 02:10:21.437236 containerd[1463]: time="2026-03-07T02:10:21.437207990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 02:10:21.458156 containerd[1463]: time="2026-03-07T02:10:21.456970257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 02:10:21.459422 
containerd[1463]: time="2026-03-07T02:10:21.459075526Z" level=info msg="StartContainer for \"a225f96a477943655f03020c2e45df08ee2bf63e9116e82475b51321321c6f35\" returns successfully" Mar 7 02:10:21.463412 containerd[1463]: time="2026-03-07T02:10:21.463304473Z" level=info msg="CreateContainer within sandbox \"d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 02:10:21.489207 containerd[1463]: time="2026-03-07T02:10:21.489125532Z" level=info msg="CreateContainer within sandbox \"d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c00cfd8bf71c1a84447fafa3bf862273ed6055460a024c6696c4acb25c4133c0\"" Mar 7 02:10:21.490487 containerd[1463]: time="2026-03-07T02:10:21.490361958Z" level=info msg="StartContainer for \"c00cfd8bf71c1a84447fafa3bf862273ed6055460a024c6696c4acb25c4133c0\"" Mar 7 02:10:21.528177 systemd[1]: Started cri-containerd-c00cfd8bf71c1a84447fafa3bf862273ed6055460a024c6696c4acb25c4133c0.scope - libcontainer container c00cfd8bf71c1a84447fafa3bf862273ed6055460a024c6696c4acb25c4133c0. Mar 7 02:10:21.592482 containerd[1463]: time="2026-03-07T02:10:21.592188440Z" level=info msg="StartContainer for \"c00cfd8bf71c1a84447fafa3bf862273ed6055460a024c6696c4acb25c4133c0\" returns successfully" Mar 7 02:10:21.739377 containerd[1463]: time="2026-03-07T02:10:21.739277420Z" level=info msg="StopPodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\"" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.821 [INFO][5357] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.821 [INFO][5357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" iface="eth0" netns="/var/run/netns/cni-b0824f41-139b-1a0b-93a4-6ca56951f231" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.823 [INFO][5357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" iface="eth0" netns="/var/run/netns/cni-b0824f41-139b-1a0b-93a4-6ca56951f231" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.824 [INFO][5357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" iface="eth0" netns="/var/run/netns/cni-b0824f41-139b-1a0b-93a4-6ca56951f231" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.824 [INFO][5357] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.824 [INFO][5357] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.862 [INFO][5366] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.862 [INFO][5366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.862 [INFO][5366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.872 [WARNING][5366] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.872 [INFO][5366] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.874 [INFO][5366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:21.884360 containerd[1463]: 2026-03-07 02:10:21.879 [INFO][5357] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:21.886643 containerd[1463]: time="2026-03-07T02:10:21.885973373Z" level=info msg="TearDown network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" successfully" Mar 7 02:10:21.886643 containerd[1463]: time="2026-03-07T02:10:21.886231836Z" level=info msg="StopPodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" returns successfully" Mar 7 02:10:21.888502 systemd[1]: run-netns-cni\x2db0824f41\x2d139b\x2d1a0b\x2d93a4\x2d6ca56951f231.mount: Deactivated successfully. 
Mar 7 02:10:21.892923 containerd[1463]: time="2026-03-07T02:10:21.892739739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8n85,Uid:635ed81a-2a01-425c-84a2-97d0d48a5575,Namespace:calico-system,Attempt:1,}" Mar 7 02:10:22.025032 systemd-networkd[1382]: calia24815c3ef0: Gained IPv6LL Mar 7 02:10:22.074974 kubelet[2513]: E0307 02:10:22.074209 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:22.114579 kubelet[2513]: I0307 02:10:22.114357 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-dff7f645b-tvtx5" podStartSLOduration=31.046454244 podStartE2EDuration="34.114337834s" podCreationTimestamp="2026-03-07 02:09:48 +0000 UTC" firstStartedPulling="2026-03-07 02:10:18.388899841 +0000 UTC m=+44.764414968" lastFinishedPulling="2026-03-07 02:10:21.456783422 +0000 UTC m=+47.832298558" observedRunningTime="2026-03-07 02:10:22.112503218 +0000 UTC m=+48.488018345" watchObservedRunningTime="2026-03-07 02:10:22.114337834 +0000 UTC m=+48.489852961" Mar 7 02:10:22.165768 systemd-networkd[1382]: cali2d8e8347b89: Link UP Mar 7 02:10:22.168657 systemd-networkd[1382]: cali2d8e8347b89: Gained carrier Mar 7 02:10:22.191084 kubelet[2513]: I0307 02:10:22.190076 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-wcmzt" podStartSLOduration=30.096315872 podStartE2EDuration="34.190058876s" podCreationTimestamp="2026-03-07 02:09:48 +0000 UTC" firstStartedPulling="2026-03-07 02:10:17.194976356 +0000 UTC m=+43.570491481" lastFinishedPulling="2026-03-07 02:10:21.288719358 +0000 UTC m=+47.664234485" observedRunningTime="2026-03-07 02:10:22.188579472 +0000 UTC m=+48.564094598" watchObservedRunningTime="2026-03-07 02:10:22.190058876 +0000 UTC m=+48.565574002" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 
02:10:21.956 [INFO][5373] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--z8n85-eth0 csi-node-driver- calico-system 635ed81a-2a01-425c-84a2-97d0d48a5575 1092 0 2026-03-07 02:09:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-z8n85 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2d8e8347b89 [] [] }} ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:21.956 [INFO][5373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.002 [INFO][5388] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" HandleID="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.012 [INFO][5388] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" HandleID="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc000582ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-z8n85", "timestamp":"2026-03-07 02:10:22.002870951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ac420)} Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.012 [INFO][5388] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.013 [INFO][5388] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.013 [INFO][5388] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.017 [INFO][5388] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.025 [INFO][5388] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.034 [INFO][5388] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.037 [INFO][5388] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.041 [INFO][5388] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.041 [INFO][5388] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.044 [INFO][5388] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9 Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.068 [INFO][5388] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.122 [INFO][5388] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.125 [INFO][5388] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" host="localhost" Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.127 [INFO][5388] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:10:22.203643 containerd[1463]: 2026-03-07 02:10:22.127 [INFO][5388] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" HandleID="k8s-pod-network.885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.204572 containerd[1463]: 2026-03-07 02:10:22.156 [INFO][5373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z8n85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"635ed81a-2a01-425c-84a2-97d0d48a5575", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-z8n85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e8347b89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:22.204572 containerd[1463]: 2026-03-07 02:10:22.156 [INFO][5373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.204572 containerd[1463]: 2026-03-07 02:10:22.157 [INFO][5373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d8e8347b89 ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.204572 containerd[1463]: 2026-03-07 02:10:22.171 [INFO][5373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.204572 containerd[1463]: 2026-03-07 02:10:22.174 [INFO][5373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z8n85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"635ed81a-2a01-425c-84a2-97d0d48a5575", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9", Pod:"csi-node-driver-z8n85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e8347b89", MAC:"36:de:2e:71:52:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:22.204572 containerd[1463]: 2026-03-07 02:10:22.199 [INFO][5373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9" Namespace="calico-system" Pod="csi-node-driver-z8n85" WorkloadEndpoint="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:22.257689 containerd[1463]: time="2026-03-07T02:10:22.257241322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:10:22.257689 containerd[1463]: time="2026-03-07T02:10:22.257373219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:10:22.257689 containerd[1463]: time="2026-03-07T02:10:22.257398426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:22.257689 containerd[1463]: time="2026-03-07T02:10:22.257532716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:10:22.301284 systemd[1]: Started cri-containerd-885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9.scope - libcontainer container 885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9. Mar 7 02:10:22.325767 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:10:22.352203 containerd[1463]: time="2026-03-07T02:10:22.351916766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8n85,Uid:635ed81a-2a01-425c-84a2-97d0d48a5575,Namespace:calico-system,Attempt:1,} returns sandbox id \"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9\"" Mar 7 02:10:23.104915 kubelet[2513]: I0307 02:10:23.104766 2513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:10:23.106412 kubelet[2513]: E0307 02:10:23.106322 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:23.497671 systemd-networkd[1382]: cali2d8e8347b89: Gained IPv6LL Mar 7 02:10:24.108646 kubelet[2513]: E0307 02:10:24.108546 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:10:25.344755 containerd[1463]: time="2026-03-07T02:10:25.344581829Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:25.346531 containerd[1463]: time="2026-03-07T02:10:25.346423635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 02:10:25.348672 containerd[1463]: time="2026-03-07T02:10:25.348567978Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:25.353299 containerd[1463]: time="2026-03-07T02:10:25.353153485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:25.354705 containerd[1463]: time="2026-03-07T02:10:25.354643878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.897602928s" Mar 7 02:10:25.354805 containerd[1463]: time="2026-03-07T02:10:25.354707787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 02:10:25.356376 containerd[1463]: time="2026-03-07T02:10:25.356249384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 02:10:25.399579 containerd[1463]: time="2026-03-07T02:10:25.399484832Z" level=info msg="CreateContainer within sandbox \"6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 02:10:25.443787 containerd[1463]: time="2026-03-07T02:10:25.442167259Z" level=info msg="CreateContainer within sandbox \"6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"48b74d4bb6d5b21f61ebcc148f8cfcbc656581b3d15cfcd0b5488d30029a2a62\"" Mar 7 02:10:25.443787 containerd[1463]: time="2026-03-07T02:10:25.443407406Z" level=info msg="StartContainer for \"48b74d4bb6d5b21f61ebcc148f8cfcbc656581b3d15cfcd0b5488d30029a2a62\"" Mar 7 02:10:25.453996 systemd[1]: Started sshd@9-10.0.0.4:22-10.0.0.1:44918.service - OpenSSH per-connection server daemon (10.0.0.1:44918). Mar 7 02:10:25.570377 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 44918 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:10:25.573748 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:10:25.585718 systemd-logind[1453]: New session 10 of user core. Mar 7 02:10:25.596520 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 02:10:25.624186 systemd[1]: Started cri-containerd-48b74d4bb6d5b21f61ebcc148f8cfcbc656581b3d15cfcd0b5488d30029a2a62.scope - libcontainer container 48b74d4bb6d5b21f61ebcc148f8cfcbc656581b3d15cfcd0b5488d30029a2a62. Mar 7 02:10:25.760318 containerd[1463]: time="2026-03-07T02:10:25.760236907Z" level=info msg="StartContainer for \"48b74d4bb6d5b21f61ebcc148f8cfcbc656581b3d15cfcd0b5488d30029a2a62\" returns successfully" Mar 7 02:10:26.118173 sshd[5523]: pam_unix(sshd:session): session closed for user core Mar 7 02:10:26.124757 systemd[1]: sshd@9-10.0.0.4:22-10.0.0.1:44918.service: Deactivated successfully. Mar 7 02:10:26.129559 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 02:10:26.132672 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. 
Mar 7 02:10:26.135736 systemd-logind[1453]: Removed session 10. Mar 7 02:10:26.234475 kubelet[2513]: I0307 02:10:26.234336 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59487fcc84-nrqln" podStartSLOduration=30.387870557 podStartE2EDuration="37.23431726s" podCreationTimestamp="2026-03-07 02:09:49 +0000 UTC" firstStartedPulling="2026-03-07 02:10:18.509653119 +0000 UTC m=+44.885168246" lastFinishedPulling="2026-03-07 02:10:25.356099813 +0000 UTC m=+51.731614949" observedRunningTime="2026-03-07 02:10:26.150086611 +0000 UTC m=+52.525601757" watchObservedRunningTime="2026-03-07 02:10:26.23431726 +0000 UTC m=+52.609832386" Mar 7 02:10:26.850119 containerd[1463]: time="2026-03-07T02:10:26.850011978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:26.851475 containerd[1463]: time="2026-03-07T02:10:26.851333768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 02:10:26.852941 containerd[1463]: time="2026-03-07T02:10:26.852883750Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:26.855877 containerd[1463]: time="2026-03-07T02:10:26.855726426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:26.856903 containerd[1463]: time="2026-03-07T02:10:26.856797109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.500487522s" Mar 7 02:10:26.856972 containerd[1463]: time="2026-03-07T02:10:26.856910932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 02:10:26.862575 containerd[1463]: time="2026-03-07T02:10:26.862539848Z" level=info msg="CreateContainer within sandbox \"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 02:10:26.925084 containerd[1463]: time="2026-03-07T02:10:26.925017358Z" level=info msg="CreateContainer within sandbox \"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3c61db61cc64e6e8a4340ab4b925077b4b91ea0966e4749a03047666388d14ea\"" Mar 7 02:10:26.926306 containerd[1463]: time="2026-03-07T02:10:26.926240791Z" level=info msg="StartContainer for \"3c61db61cc64e6e8a4340ab4b925077b4b91ea0966e4749a03047666388d14ea\"" Mar 7 02:10:26.980185 systemd[1]: Started cri-containerd-3c61db61cc64e6e8a4340ab4b925077b4b91ea0966e4749a03047666388d14ea.scope - libcontainer container 3c61db61cc64e6e8a4340ab4b925077b4b91ea0966e4749a03047666388d14ea. 
Mar 7 02:10:27.027863 containerd[1463]: time="2026-03-07T02:10:27.027690420Z" level=info msg="StartContainer for \"3c61db61cc64e6e8a4340ab4b925077b4b91ea0966e4749a03047666388d14ea\" returns successfully" Mar 7 02:10:27.029938 containerd[1463]: time="2026-03-07T02:10:27.029898279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 02:10:27.752287 containerd[1463]: time="2026-03-07T02:10:27.752171988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:27.753575 containerd[1463]: time="2026-03-07T02:10:27.753486633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 02:10:27.755007 containerd[1463]: time="2026-03-07T02:10:27.754906125Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:27.757792 containerd[1463]: time="2026-03-07T02:10:27.757726108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:10:27.758561 containerd[1463]: time="2026-03-07T02:10:27.758530998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 728.605046ms" Mar 7 02:10:27.758677 containerd[1463]: time="2026-03-07T02:10:27.758561475Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 02:10:27.764166 containerd[1463]: time="2026-03-07T02:10:27.763587759Z" level=info msg="CreateContainer within sandbox \"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 02:10:27.782565 containerd[1463]: time="2026-03-07T02:10:27.782476927Z" level=info msg="CreateContainer within sandbox \"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f5f02219e42ceaa22baa5876095b0b1a7463a2365a4e5e3471403412770745f8\"" Mar 7 02:10:27.783356 containerd[1463]: time="2026-03-07T02:10:27.783309481Z" level=info msg="StartContainer for \"f5f02219e42ceaa22baa5876095b0b1a7463a2365a4e5e3471403412770745f8\"" Mar 7 02:10:27.841115 systemd[1]: run-containerd-runc-k8s.io-f5f02219e42ceaa22baa5876095b0b1a7463a2365a4e5e3471403412770745f8-runc.tOY1Qe.mount: Deactivated successfully. Mar 7 02:10:27.851181 systemd[1]: Started cri-containerd-f5f02219e42ceaa22baa5876095b0b1a7463a2365a4e5e3471403412770745f8.scope - libcontainer container f5f02219e42ceaa22baa5876095b0b1a7463a2365a4e5e3471403412770745f8. 
Mar 7 02:10:27.908704 containerd[1463]: time="2026-03-07T02:10:27.908166252Z" level=info msg="StartContainer for \"f5f02219e42ceaa22baa5876095b0b1a7463a2365a4e5e3471403412770745f8\" returns successfully" Mar 7 02:10:28.149547 kubelet[2513]: I0307 02:10:28.149357 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z8n85" podStartSLOduration=33.745205401 podStartE2EDuration="39.149344197s" podCreationTimestamp="2026-03-07 02:09:49 +0000 UTC" firstStartedPulling="2026-03-07 02:10:22.355197915 +0000 UTC m=+48.730713041" lastFinishedPulling="2026-03-07 02:10:27.759336711 +0000 UTC m=+54.134851837" observedRunningTime="2026-03-07 02:10:28.149111983 +0000 UTC m=+54.524627109" watchObservedRunningTime="2026-03-07 02:10:28.149344197 +0000 UTC m=+54.524859323" Mar 7 02:10:28.815745 kubelet[2513]: I0307 02:10:28.815695 2513 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 02:10:28.816745 kubelet[2513]: I0307 02:10:28.816680 2513 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 02:10:31.139212 systemd[1]: Started sshd@10-10.0.0.4:22-10.0.0.1:40326.service - OpenSSH per-connection server daemon (10.0.0.1:40326). Mar 7 02:10:31.193981 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 40326 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:10:31.195777 sshd[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:10:31.201725 systemd-logind[1453]: New session 11 of user core. Mar 7 02:10:31.209058 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 7 02:10:31.374312 sshd[5726]: pam_unix(sshd:session): session closed for user core Mar 7 02:10:31.385546 systemd[1]: sshd@10-10.0.0.4:22-10.0.0.1:40326.service: Deactivated successfully. Mar 7 02:10:31.387781 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 02:10:31.389471 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Mar 7 02:10:31.397300 systemd[1]: Started sshd@11-10.0.0.4:22-10.0.0.1:40328.service - OpenSSH per-connection server daemon (10.0.0.1:40328). Mar 7 02:10:31.398747 systemd-logind[1453]: Removed session 11. Mar 7 02:10:31.431394 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:10:31.433234 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:10:31.438867 systemd-logind[1453]: New session 12 of user core. Mar 7 02:10:31.446033 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 02:10:31.674212 sshd[5741]: pam_unix(sshd:session): session closed for user core Mar 7 02:10:31.678549 systemd[1]: sshd@11-10.0.0.4:22-10.0.0.1:40328.service: Deactivated successfully. Mar 7 02:10:31.682385 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 02:10:31.686272 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Mar 7 02:10:31.689169 systemd-logind[1453]: Removed session 12. Mar 7 02:10:31.756723 systemd[1]: Started sshd@12-10.0.0.4:22-10.0.0.1:40344.service - OpenSSH per-connection server daemon (10.0.0.1:40344). Mar 7 02:10:31.794712 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 40344 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ Mar 7 02:10:31.797120 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:10:31.803526 systemd-logind[1453]: New session 13 of user core. Mar 7 02:10:31.812143 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 7 02:10:31.969414 sshd[5754]: pam_unix(sshd:session): session closed for user core Mar 7 02:10:31.973249 systemd[1]: sshd@12-10.0.0.4:22-10.0.0.1:40344.service: Deactivated successfully. Mar 7 02:10:31.976251 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 02:10:31.979138 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Mar 7 02:10:31.980771 systemd-logind[1453]: Removed session 13. Mar 7 02:10:33.714960 containerd[1463]: time="2026-03-07T02:10:33.714785309Z" level=info msg="StopPodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\"" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.785 [WARNING][5785] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pcjrv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a63514-f924-4601-8280-ce64e2c76969", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5", Pod:"coredns-66bc5c9577-pcjrv", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b72397878f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.785 [INFO][5785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.786 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" iface="eth0" netns="" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.786 [INFO][5785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.786 [INFO][5785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.836 [INFO][5796] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.837 [INFO][5796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.838 [INFO][5796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.846 [WARNING][5796] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.846 [INFO][5796] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.848 [INFO][5796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:33.855550 containerd[1463]: 2026-03-07 02:10:33.851 [INFO][5785] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.855550 containerd[1463]: time="2026-03-07T02:10:33.855488095Z" level=info msg="TearDown network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" successfully" Mar 7 02:10:33.855550 containerd[1463]: time="2026-03-07T02:10:33.855514694Z" level=info msg="StopPodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" returns successfully" Mar 7 02:10:33.903839 containerd[1463]: time="2026-03-07T02:10:33.903768240Z" level=info msg="RemovePodSandbox for \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\"" Mar 7 02:10:33.905596 containerd[1463]: time="2026-03-07T02:10:33.905564112Z" level=info msg="Forcibly stopping sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\"" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.944 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pcjrv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a63514-f924-4601-8280-ce64e2c76969", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37fe1889a53543212d821dc494ff8c48a64ecc5642ede5b245057cff6337f2d5", Pod:"coredns-66bc5c9577-pcjrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b72397878f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.944 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.945 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" iface="eth0" netns="" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.945 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.945 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.975 [INFO][5821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.975 [INFO][5821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.975 [INFO][5821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.982 [WARNING][5821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.982 [INFO][5821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" HandleID="k8s-pod-network.90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Workload="localhost-k8s-coredns--66bc5c9577--pcjrv-eth0" Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.983 [INFO][5821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:33.989949 containerd[1463]: 2026-03-07 02:10:33.986 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0" Mar 7 02:10:33.989949 containerd[1463]: time="2026-03-07T02:10:33.989471552Z" level=info msg="TearDown network for sandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" successfully" Mar 7 02:10:34.023177 containerd[1463]: time="2026-03-07T02:10:34.023096119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:10:34.023282 containerd[1463]: time="2026-03-07T02:10:34.023188331Z" level=info msg="RemovePodSandbox \"90b8fc631f527b9535199336d6b7a85af33307f8819de27170b9fa0e0d4f0da0\" returns successfully" Mar 7 02:10:34.034188 containerd[1463]: time="2026-03-07T02:10:34.034150182Z" level=info msg="StopPodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\"" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.090 [WARNING][5839] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z8n85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"635ed81a-2a01-425c-84a2-97d0d48a5575", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9", Pod:"csi-node-driver-z8n85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e8347b89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.090 [INFO][5839] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.090 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" iface="eth0" netns="" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.090 [INFO][5839] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.090 [INFO][5839] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.120 [INFO][5848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.120 [INFO][5848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.120 [INFO][5848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.127 [WARNING][5848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.127 [INFO][5848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.129 [INFO][5848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.134546 containerd[1463]: 2026-03-07 02:10:34.132 [INFO][5839] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.135103 containerd[1463]: time="2026-03-07T02:10:34.134554494Z" level=info msg="TearDown network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" successfully" Mar 7 02:10:34.135103 containerd[1463]: time="2026-03-07T02:10:34.134587767Z" level=info msg="StopPodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" returns successfully" Mar 7 02:10:34.135236 containerd[1463]: time="2026-03-07T02:10:34.135174810Z" level=info msg="RemovePodSandbox for \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\"" Mar 7 02:10:34.135236 containerd[1463]: time="2026-03-07T02:10:34.135231887Z" level=info msg="Forcibly stopping sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\"" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.182 [WARNING][5865] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z8n85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"635ed81a-2a01-425c-84a2-97d0d48a5575", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"885fefb9583c85f66396b5ebcf068f5c71d757938727a8141e6b9046e78f53a9", Pod:"csi-node-driver-z8n85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d8e8347b89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.182 [INFO][5865] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.182 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" iface="eth0" netns="" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.182 [INFO][5865] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.182 [INFO][5865] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.213 [INFO][5873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.213 [INFO][5873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.213 [INFO][5873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.222 [WARNING][5873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.223 [INFO][5873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" HandleID="k8s-pod-network.1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Workload="localhost-k8s-csi--node--driver--z8n85-eth0" Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.224 [INFO][5873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.231418 containerd[1463]: 2026-03-07 02:10:34.228 [INFO][5865] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe" Mar 7 02:10:34.231949 containerd[1463]: time="2026-03-07T02:10:34.231442164Z" level=info msg="TearDown network for sandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" successfully" Mar 7 02:10:34.236383 containerd[1463]: time="2026-03-07T02:10:34.236323134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:10:34.236383 containerd[1463]: time="2026-03-07T02:10:34.236374440Z" level=info msg="RemovePodSandbox \"1eb20c21793d419851037ef3ddfe294c72ac1b0b24df07e5296a1149c7a29dbe\" returns successfully" Mar 7 02:10:34.237286 containerd[1463]: time="2026-03-07T02:10:34.237247620Z" level=info msg="StopPodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\"" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.285 [WARNING][5892] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7cwdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"46d14d7c-35d0-47e0-aa91-219c755e8d1d", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a", Pod:"coredns-66bc5c9577-7cwdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24815c3ef0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.285 [INFO][5892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.285 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" iface="eth0" netns="" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.285 [INFO][5892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.285 [INFO][5892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.321 [INFO][5900] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.321 [INFO][5900] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.321 [INFO][5900] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.327 [WARNING][5900] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.327 [INFO][5900] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.329 [INFO][5900] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.334857 containerd[1463]: 2026-03-07 02:10:34.331 [INFO][5892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.334857 containerd[1463]: time="2026-03-07T02:10:34.334775600Z" level=info msg="TearDown network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" successfully" Mar 7 02:10:34.334857 containerd[1463]: time="2026-03-07T02:10:34.334799345Z" level=info msg="StopPodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" returns successfully" Mar 7 02:10:34.335521 containerd[1463]: time="2026-03-07T02:10:34.335492008Z" level=info msg="RemovePodSandbox for \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\"" Mar 7 02:10:34.335668 containerd[1463]: time="2026-03-07T02:10:34.335532354Z" level=info msg="Forcibly stopping sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\"" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.377 [WARNING][5917] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7cwdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"46d14d7c-35d0-47e0-aa91-219c755e8d1d", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae85c44e5b0bfe1817d569170e49c76ff54b9388d010719136c24f78e62017a", Pod:"coredns-66bc5c9577-7cwdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24815c3ef0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.377 [INFO][5917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.378 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" iface="eth0" netns="" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.378 [INFO][5917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.378 [INFO][5917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.403 [INFO][5925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.403 [INFO][5925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.403 [INFO][5925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.410 [WARNING][5925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.410 [INFO][5925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" HandleID="k8s-pod-network.8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Workload="localhost-k8s-coredns--66bc5c9577--7cwdz-eth0" Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.412 [INFO][5925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.417531 containerd[1463]: 2026-03-07 02:10:34.415 [INFO][5917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7" Mar 7 02:10:34.417997 containerd[1463]: time="2026-03-07T02:10:34.417568894Z" level=info msg="TearDown network for sandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" successfully" Mar 7 02:10:34.435966 containerd[1463]: time="2026-03-07T02:10:34.435885524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:10:34.435966 containerd[1463]: time="2026-03-07T02:10:34.435960775Z" level=info msg="RemovePodSandbox \"8419b653bbd0be80046d21336710eb9677ed86254425e59231a48489f23de5f7\" returns successfully" Mar 7 02:10:34.436934 containerd[1463]: time="2026-03-07T02:10:34.436883847Z" level=info msg="StopPodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\"" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.477 [WARNING][5943] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" WorkloadEndpoint="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.477 [INFO][5943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.477 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" iface="eth0" netns="" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.477 [INFO][5943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.477 [INFO][5943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.504 [INFO][5952] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.504 [INFO][5952] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.504 [INFO][5952] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.513 [WARNING][5952] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.513 [INFO][5952] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.516 [INFO][5952] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.527799 containerd[1463]: 2026-03-07 02:10:34.523 [INFO][5943] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.528367 containerd[1463]: time="2026-03-07T02:10:34.527956629Z" level=info msg="TearDown network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" successfully" Mar 7 02:10:34.528367 containerd[1463]: time="2026-03-07T02:10:34.527993558Z" level=info msg="StopPodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" returns successfully" Mar 7 02:10:34.528739 containerd[1463]: time="2026-03-07T02:10:34.528696838Z" level=info msg="RemovePodSandbox for \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\"" Mar 7 02:10:34.528884 containerd[1463]: time="2026-03-07T02:10:34.528746982Z" level=info msg="Forcibly stopping sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\"" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.583 [WARNING][5970] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" 
WorkloadEndpoint="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.583 [INFO][5970] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.583 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" iface="eth0" netns="" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.583 [INFO][5970] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.584 [INFO][5970] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.626 [INFO][5978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.626 [INFO][5978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.626 [INFO][5978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.640 [WARNING][5978] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.640 [INFO][5978] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" HandleID="k8s-pod-network.85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Workload="localhost-k8s-whisker--75c74c5c--746nx-eth0" Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.645 [INFO][5978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.654093 containerd[1463]: 2026-03-07 02:10:34.648 [INFO][5970] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377" Mar 7 02:10:34.654562 containerd[1463]: time="2026-03-07T02:10:34.654087494Z" level=info msg="TearDown network for sandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" successfully" Mar 7 02:10:34.662028 containerd[1463]: time="2026-03-07T02:10:34.661962580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:10:34.662191 containerd[1463]: time="2026-03-07T02:10:34.662066083Z" level=info msg="RemovePodSandbox \"85cce3fd0289decbe1147f3d27da3e9dc3c4431141b30bbf2ec95a7e117f2377\" returns successfully" Mar 7 02:10:34.662931 containerd[1463]: time="2026-03-07T02:10:34.662867031Z" level=info msg="StopPodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\"" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.793 [WARNING][5996] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"64ba9c77-5311-407e-91f4-43c5b17b302e", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f", Pod:"calico-apiserver-dff7f645b-tvtx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddee4edc036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.793 [INFO][5996] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.793 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" iface="eth0" netns="" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.793 [INFO][5996] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.793 [INFO][5996] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.919 [INFO][6004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.919 [INFO][6004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.920 [INFO][6004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.932 [WARNING][6004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.932 [INFO][6004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.935 [INFO][6004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:34.947045 containerd[1463]: 2026-03-07 02:10:34.942 [INFO][5996] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:34.947045 containerd[1463]: time="2026-03-07T02:10:34.946512337Z" level=info msg="TearDown network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" successfully" Mar 7 02:10:34.947045 containerd[1463]: time="2026-03-07T02:10:34.946547853Z" level=info msg="StopPodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" returns successfully" Mar 7 02:10:34.959108 containerd[1463]: time="2026-03-07T02:10:34.957955394Z" level=info msg="RemovePodSandbox for \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\"" Mar 7 02:10:34.959108 containerd[1463]: time="2026-03-07T02:10:34.958006320Z" level=info msg="Forcibly stopping sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\"" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.080 [WARNING][6021] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"64ba9c77-5311-407e-91f4-43c5b17b302e", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0b3e63b76ebad2ba6b82d41092ca43e335b84dfe80236a0adf91607851f5d9f", Pod:"calico-apiserver-dff7f645b-tvtx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddee4edc036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.080 [INFO][6021] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.080 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" iface="eth0" netns="" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.080 [INFO][6021] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.080 [INFO][6021] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.153 [INFO][6029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.153 [INFO][6029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.154 [INFO][6029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.168 [WARNING][6029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.168 [INFO][6029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" HandleID="k8s-pod-network.c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Workload="localhost-k8s-calico--apiserver--dff7f645b--tvtx5-eth0" Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.182 [INFO][6029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:35.196801 containerd[1463]: 2026-03-07 02:10:35.190 [INFO][6021] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa" Mar 7 02:10:35.196801 containerd[1463]: time="2026-03-07T02:10:35.196678236Z" level=info msg="TearDown network for sandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" successfully" Mar 7 02:10:35.205875 containerd[1463]: time="2026-03-07T02:10:35.205705635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:10:35.205875 containerd[1463]: time="2026-03-07T02:10:35.205795322Z" level=info msg="RemovePodSandbox \"c3257f127c19979e6da0d803ca332527b2f071f5284bb8bcf8c20d12bcae03aa\" returns successfully" Mar 7 02:10:35.211335 containerd[1463]: time="2026-03-07T02:10:35.207937011Z" level=info msg="StopPodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\"" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.344 [WARNING][6045] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c3a031a0-0380-4645-b6c5-339ff7d45b89", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835", Pod:"goldmane-cccfbd5cf-wcmzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18eed88e107", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.345 [INFO][6045] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.345 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" iface="eth0" netns="" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.345 [INFO][6045] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.345 [INFO][6045] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.396 [INFO][6054] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.396 [INFO][6054] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.396 [INFO][6054] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.410 [WARNING][6054] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.410 [INFO][6054] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.413 [INFO][6054] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:35.423130 containerd[1463]: 2026-03-07 02:10:35.418 [INFO][6045] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.424304 containerd[1463]: time="2026-03-07T02:10:35.423519616Z" level=info msg="TearDown network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" successfully" Mar 7 02:10:35.424304 containerd[1463]: time="2026-03-07T02:10:35.423558198Z" level=info msg="StopPodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" returns successfully" Mar 7 02:10:35.427087 containerd[1463]: time="2026-03-07T02:10:35.427043998Z" level=info msg="RemovePodSandbox for \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\"" Mar 7 02:10:35.427223 containerd[1463]: time="2026-03-07T02:10:35.427086878Z" level=info msg="Forcibly stopping sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\"" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.513 [WARNING][6071] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c3a031a0-0380-4645-b6c5-339ff7d45b89", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"722c6e92b014847cb88aaa35c6a231a0015aa777ad85852ec2e4baf04bcdb835", Pod:"goldmane-cccfbd5cf-wcmzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18eed88e107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.513 [INFO][6071] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.513 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" iface="eth0" netns="" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.513 [INFO][6071] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.513 [INFO][6071] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.555 [INFO][6079] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.556 [INFO][6079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.556 [INFO][6079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.567 [WARNING][6079] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.567 [INFO][6079] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" HandleID="k8s-pod-network.b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Workload="localhost-k8s-goldmane--cccfbd5cf--wcmzt-eth0" Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.571 [INFO][6079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:10:35.583016 containerd[1463]: 2026-03-07 02:10:35.574 [INFO][6071] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957" Mar 7 02:10:35.583016 containerd[1463]: time="2026-03-07T02:10:35.582936289Z" level=info msg="TearDown network for sandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" successfully" Mar 7 02:10:35.589890 containerd[1463]: time="2026-03-07T02:10:35.589748445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 02:10:35.590012 containerd[1463]: time="2026-03-07T02:10:35.589893205Z" level=info msg="RemovePodSandbox \"b88f5a2e813fc554af0df7ed19f365ec9421697b4a041be66ee5e78d3c43b957\" returns successfully" Mar 7 02:10:35.590617 containerd[1463]: time="2026-03-07T02:10:35.590549530Z" level=info msg="StopPodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\"" Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.661 [WARNING][6097] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0", GenerateName:"calico-kube-controllers-59487fcc84-", Namespace:"calico-system", SelfLink:"", UID:"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59487fcc84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045", Pod:"calico-kube-controllers-59487fcc84-nrqln", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2746861d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.662 [INFO][6097] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.662 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" iface="eth0" netns=""
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.662 [INFO][6097] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.662 [INFO][6097] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.694 [INFO][6106] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0"
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.695 [INFO][6106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.695 [INFO][6106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.705 [WARNING][6106] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0"
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.705 [INFO][6106] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0"
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.708 [INFO][6106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 02:10:35.715310 containerd[1463]: 2026-03-07 02:10:35.711 [INFO][6097] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.715310 containerd[1463]: time="2026-03-07T02:10:35.715281328Z" level=info msg="TearDown network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" successfully"
Mar 7 02:10:35.715310 containerd[1463]: time="2026-03-07T02:10:35.715310743Z" level=info msg="StopPodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" returns successfully"
Mar 7 02:10:35.717264 containerd[1463]: time="2026-03-07T02:10:35.717208850Z" level=info msg="RemovePodSandbox for \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\""
Mar 7 02:10:35.717264 containerd[1463]: time="2026-03-07T02:10:35.717281385Z" level=info msg="Forcibly stopping sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\""
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.790 [WARNING][6124] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0", GenerateName:"calico-kube-controllers-59487fcc84-", Namespace:"calico-system", SelfLink:"", UID:"1cc2f24d-5c0d-45e8-8bc7-308383f07cd0", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59487fcc84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6956eadab9ffa9164c8b0b61f0ea4ba43fd9c1d6cee1948fd35d1334dfdfb045", Pod:"calico-kube-controllers-59487fcc84-nrqln", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2746861d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.791 [INFO][6124] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.791 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" iface="eth0" netns=""
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.791 [INFO][6124] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.791 [INFO][6124] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.832 [INFO][6132] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0"
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.832 [INFO][6132] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.832 [INFO][6132] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.847 [WARNING][6132] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0"
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.847 [INFO][6132] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" HandleID="k8s-pod-network.14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b" Workload="localhost-k8s-calico--kube--controllers--59487fcc84--nrqln-eth0"
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.854 [INFO][6132] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 02:10:35.865958 containerd[1463]: 2026-03-07 02:10:35.857 [INFO][6124] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b"
Mar 7 02:10:35.865958 containerd[1463]: time="2026-03-07T02:10:35.863005512Z" level=info msg="TearDown network for sandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" successfully"
Mar 7 02:10:35.877871 containerd[1463]: time="2026-03-07T02:10:35.877689492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 02:10:35.877871 containerd[1463]: time="2026-03-07T02:10:35.877807260Z" level=info msg="RemovePodSandbox \"14b9f7740edc1203238e608612d2e5362271e66c6e9cadc0363d516544ee4c2b\" returns successfully"
Mar 7 02:10:35.878706 containerd[1463]: time="2026-03-07T02:10:35.878548738Z" level=info msg="StopPodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\""
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:35.955 [WARNING][6149] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"84c2f7ea-3510-4273-9bd1-34a319c955aa", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77", Pod:"calico-apiserver-dff7f645b-qn22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali161ee84be9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:35.956 [INFO][6149] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:35.956 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" iface="eth0" netns=""
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:35.956 [INFO][6149] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:35.956 [INFO][6149] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.018 [INFO][6158] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0"
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.018 [INFO][6158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.018 [INFO][6158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.034 [WARNING][6158] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0"
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.034 [INFO][6158] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0"
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.039 [INFO][6158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 02:10:36.053354 containerd[1463]: 2026-03-07 02:10:36.047 [INFO][6149] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.053354 containerd[1463]: time="2026-03-07T02:10:36.053342164Z" level=info msg="TearDown network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" successfully"
Mar 7 02:10:36.053354 containerd[1463]: time="2026-03-07T02:10:36.053376328Z" level=info msg="StopPodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" returns successfully"
Mar 7 02:10:36.056802 containerd[1463]: time="2026-03-07T02:10:36.056552203Z" level=info msg="RemovePodSandbox for \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\""
Mar 7 02:10:36.056802 containerd[1463]: time="2026-03-07T02:10:36.056621402Z" level=info msg="Forcibly stopping sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\""
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.160 [WARNING][6175] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0", GenerateName:"calico-apiserver-dff7f645b-", Namespace:"calico-system", SelfLink:"", UID:"84c2f7ea-3510-4273-9bd1-34a319c955aa", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 9, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff7f645b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba6ee9a84c6c91822ff1dce23a5e9589d70bee03760cfdbf8bfccaf3301a0f77", Pod:"calico-apiserver-dff7f645b-qn22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali161ee84be9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.161 [INFO][6175] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.161 [INFO][6175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" iface="eth0" netns=""
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.161 [INFO][6175] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.161 [INFO][6175] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.229 [INFO][6184] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0"
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.230 [INFO][6184] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.230 [INFO][6184] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.243 [WARNING][6184] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0"
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.243 [INFO][6184] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" HandleID="k8s-pod-network.5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894" Workload="localhost-k8s-calico--apiserver--dff7f645b--qn22h-eth0"
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.250 [INFO][6184] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 02:10:36.265134 containerd[1463]: 2026-03-07 02:10:36.258 [INFO][6175] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894"
Mar 7 02:10:36.265134 containerd[1463]: time="2026-03-07T02:10:36.265110757Z" level=info msg="TearDown network for sandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" successfully"
Mar 7 02:10:36.285773 containerd[1463]: time="2026-03-07T02:10:36.284992280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 02:10:36.285773 containerd[1463]: time="2026-03-07T02:10:36.285153090Z" level=info msg="RemovePodSandbox \"5aaf1070732d7a8a031ce73977174df11e674761fb3347a996633a67f3634894\" returns successfully"
Mar 7 02:10:36.985172 systemd[1]: Started sshd@13-10.0.0.4:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410).
Mar 7 02:10:37.063427 sshd[6193]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:37.065921 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:37.071694 systemd-logind[1453]: New session 14 of user core.
Mar 7 02:10:37.082096 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 02:10:37.273587 sshd[6193]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:37.283094 systemd[1]: sshd@13-10.0.0.4:22-10.0.0.1:40410.service: Deactivated successfully.
Mar 7 02:10:37.286146 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 02:10:37.289016 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Mar 7 02:10:37.297730 systemd[1]: Started sshd@14-10.0.0.4:22-10.0.0.1:40422.service - OpenSSH per-connection server daemon (10.0.0.1:40422).
Mar 7 02:10:37.299358 systemd-logind[1453]: Removed session 14.
Mar 7 02:10:37.342876 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 40422 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:37.345630 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:37.355425 systemd-logind[1453]: New session 15 of user core.
Mar 7 02:10:37.374211 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 02:10:37.718806 sshd[6207]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:37.734535 systemd[1]: sshd@14-10.0.0.4:22-10.0.0.1:40422.service: Deactivated successfully.
Mar 7 02:10:37.737094 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 02:10:37.740156 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Mar 7 02:10:37.751503 systemd[1]: Started sshd@15-10.0.0.4:22-10.0.0.1:40432.service - OpenSSH per-connection server daemon (10.0.0.1:40432).
Mar 7 02:10:37.753047 systemd-logind[1453]: Removed session 15.
Mar 7 02:10:37.792166 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 40432 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:37.794242 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:37.802232 systemd-logind[1453]: New session 16 of user core.
Mar 7 02:10:37.813235 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 02:10:38.515428 sshd[6220]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:38.531337 systemd[1]: Started sshd@16-10.0.0.4:22-10.0.0.1:40434.service - OpenSSH per-connection server daemon (10.0.0.1:40434).
Mar 7 02:10:38.533348 systemd[1]: sshd@15-10.0.0.4:22-10.0.0.1:40432.service: Deactivated successfully.
Mar 7 02:10:38.537261 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 02:10:38.541063 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Mar 7 02:10:38.546526 systemd-logind[1453]: Removed session 16.
Mar 7 02:10:38.596750 sshd[6244]: Accepted publickey for core from 10.0.0.1 port 40434 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:38.599324 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:38.607472 systemd-logind[1453]: New session 17 of user core.
Mar 7 02:10:38.614061 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 02:10:39.250182 sshd[6244]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:39.275428 systemd[1]: sshd@16-10.0.0.4:22-10.0.0.1:40434.service: Deactivated successfully.
Mar 7 02:10:39.280511 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 02:10:39.284110 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Mar 7 02:10:39.296519 systemd[1]: Started sshd@17-10.0.0.4:22-10.0.0.1:40442.service - OpenSSH per-connection server daemon (10.0.0.1:40442).
Mar 7 02:10:39.301722 systemd-logind[1453]: Removed session 17.
Mar 7 02:10:39.383145 sshd[6258]: Accepted publickey for core from 10.0.0.1 port 40442 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:39.386313 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:39.397777 systemd-logind[1453]: New session 18 of user core.
Mar 7 02:10:39.411261 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 02:10:39.619450 sshd[6258]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:39.629071 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Mar 7 02:10:39.631386 systemd[1]: sshd@17-10.0.0.4:22-10.0.0.1:40442.service: Deactivated successfully.
Mar 7 02:10:39.634428 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 02:10:39.636073 systemd-logind[1453]: Removed session 18.
Mar 7 02:10:44.632943 systemd[1]: Started sshd@18-10.0.0.4:22-10.0.0.1:33734.service - OpenSSH per-connection server daemon (10.0.0.1:33734).
Mar 7 02:10:44.726964 sshd[6287]: Accepted publickey for core from 10.0.0.1 port 33734 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:44.729499 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:44.739168 systemd-logind[1453]: New session 19 of user core.
Mar 7 02:10:44.749112 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 02:10:45.137337 sshd[6287]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:45.145351 systemd[1]: sshd@18-10.0.0.4:22-10.0.0.1:33734.service: Deactivated successfully.
Mar 7 02:10:45.148597 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 02:10:45.150366 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Mar 7 02:10:45.152485 systemd-logind[1453]: Removed session 19.
Mar 7 02:10:50.149241 systemd[1]: Started sshd@19-10.0.0.4:22-10.0.0.1:59354.service - OpenSSH per-connection server daemon (10.0.0.1:59354).
Mar 7 02:10:50.199506 sshd[6338]: Accepted publickey for core from 10.0.0.1 port 59354 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:50.202235 sshd[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:50.208734 systemd-logind[1453]: New session 20 of user core.
Mar 7 02:10:50.223148 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 02:10:50.382883 sshd[6338]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:50.387133 systemd[1]: sshd@19-10.0.0.4:22-10.0.0.1:59354.service: Deactivated successfully.
Mar 7 02:10:50.389379 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 02:10:50.390422 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Mar 7 02:10:50.391873 systemd-logind[1453]: Removed session 20.
Mar 7 02:10:51.816270 kubelet[2513]: I0307 02:10:51.816187 2513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 02:10:52.518408 kubelet[2513]: I0307 02:10:52.518259 2513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 02:10:53.743503 kubelet[2513]: E0307 02:10:53.743377 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:10:54.737610 kubelet[2513]: E0307 02:10:54.737534 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:10:55.403093 systemd[1]: Started sshd@20-10.0.0.4:22-10.0.0.1:59436.service - OpenSSH per-connection server daemon (10.0.0.1:59436).
Mar 7 02:10:55.457124 sshd[6400]: Accepted publickey for core from 10.0.0.1 port 59436 ssh2: RSA SHA256:uk/DCSUTcaMpYhv163xyqXSkCn1v2IBmhikAysTxhDQ
Mar 7 02:10:55.459887 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:10:55.466949 systemd-logind[1453]: New session 21 of user core.
Mar 7 02:10:55.475071 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 02:10:55.566377 systemd[1]: run-containerd-runc-k8s.io-48b74d4bb6d5b21f61ebcc148f8cfcbc656581b3d15cfcd0b5488d30029a2a62-runc.ROP2Yk.mount: Deactivated successfully.
Mar 7 02:10:55.803011 sshd[6400]: pam_unix(sshd:session): session closed for user core
Mar 7 02:10:55.809397 systemd[1]: sshd@20-10.0.0.4:22-10.0.0.1:59436.service: Deactivated successfully.
Mar 7 02:10:55.812228 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 02:10:55.814364 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Mar 7 02:10:55.815888 systemd-logind[1453]: Removed session 21.