Apr 21 10:38:08.905369 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:38:08.905388 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:38:08.905398 kernel: BIOS-provided physical RAM map:
Apr 21 10:38:08.905404 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:38:08.905409 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 10:38:08.905414 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 10:38:08.905420 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 10:38:08.905425 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 10:38:08.905430 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 21 10:38:08.905436 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 21 10:38:08.905442 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 21 10:38:08.905447 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 21 10:38:08.905452 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 21 10:38:08.905458 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 21 10:38:08.905464 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 21 10:38:08.905470 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 10:38:08.905477 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 21 10:38:08.905482 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 21 10:38:08.905488 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 10:38:08.905493 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:38:08.905499 kernel: NX (Execute Disable) protection: active
Apr 21 10:38:08.905504 kernel: APIC: Static calls initialized
Apr 21 10:38:08.905544 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:38:08.905550 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 21 10:38:08.905555 kernel: SMBIOS 2.8 present.
Apr 21 10:38:08.905561 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 21 10:38:08.905566 kernel: Hypervisor detected: KVM
Apr 21 10:38:08.905573 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:38:08.905579 kernel: kvm-clock: using sched offset of 4963678990 cycles
Apr 21 10:38:08.905584 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:38:08.905591 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:38:08.905596 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:38:08.905602 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:38:08.905608 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 21 10:38:08.905614 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:38:08.905620 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:38:08.905645 kernel: Using GB pages for direct mapping
Apr 21 10:38:08.905655 kernel: Secure boot disabled
Apr 21 10:38:08.905664 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:38:08.905671 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 21 10:38:08.905680 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:38:08.905686 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:38:08.905692 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:38:08.905699 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 21 10:38:08.905705 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:38:08.905711 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:38:08.905717 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:38:08.905723 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:38:08.905729 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:38:08.905735 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 21 10:38:08.905742 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 21 10:38:08.905748 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 21 10:38:08.905754 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 21 10:38:08.905760 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 21 10:38:08.905766 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 21 10:38:08.905772 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 21 10:38:08.905777 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 21 10:38:08.905783 kernel: No NUMA configuration found
Apr 21 10:38:08.905789 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 21 10:38:08.905796 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 21 10:38:08.905801 kernel: Zone ranges:
Apr 21 10:38:08.905806 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:38:08.905811 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 21 10:38:08.905816 kernel: Normal empty
Apr 21 10:38:08.905821 kernel: Movable zone start for each node
Apr 21 10:38:08.905826 kernel: Early memory node ranges
Apr 21 10:38:08.905830 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:38:08.905835 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 21 10:38:08.905840 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 21 10:38:08.905846 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 21 10:38:08.905851 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 21 10:38:08.905856 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 21 10:38:08.905861 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 21 10:38:08.905866 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:38:08.905871 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:38:08.905876 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 21 10:38:08.905881 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:38:08.905886 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 21 10:38:08.905891 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:38:08.905897 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 21 10:38:08.905902 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:38:08.905907 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:38:08.905912 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:38:08.905917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:38:08.905922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:38:08.905927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:38:08.905931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:38:08.905936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:38:08.905943 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:38:08.905947 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:38:08.905952 kernel: TSC deadline timer available
Apr 21 10:38:08.905957 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:38:08.905962 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:38:08.905967 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:38:08.905972 kernel: kvm-guest: setup PV sched yield
Apr 21 10:38:08.905977 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 21 10:38:08.905982 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:38:08.905988 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:38:08.905993 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:38:08.905998 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:38:08.906003 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:38:08.906008 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:38:08.906013 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:38:08.906018 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:38:08.906023 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:38:08.906030 kernel: random: crng init done
Apr 21 10:38:08.906035 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:38:08.906040 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:38:08.906045 kernel: Fallback order for Node 0: 0
Apr 21 10:38:08.906050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 21 10:38:08.906055 kernel: Policy zone: DMA32
Apr 21 10:38:08.906060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:38:08.906065 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 172120K reserved, 0K cma-reserved)
Apr 21 10:38:08.906070 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:38:08.906076 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:38:08.906081 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:38:08.906086 kernel: Dynamic Preempt: voluntary
Apr 21 10:38:08.906091 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:38:08.906102 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:38:08.906109 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:38:08.906114 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:38:08.906120 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:38:08.906125 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:38:08.906131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:38:08.906136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:38:08.906141 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:38:08.906148 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:38:08.906154 kernel: Console: colour dummy device 80x25
Apr 21 10:38:08.906159 kernel: printk: console [ttyS0] enabled
Apr 21 10:38:08.906164 kernel: ACPI: Core revision 20230628
Apr 21 10:38:08.906170 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:38:08.906177 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:38:08.906182 kernel: x2apic enabled
Apr 21 10:38:08.906187 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:38:08.906193 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:38:08.906198 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:38:08.906204 kernel: kvm-guest: setup PV IPIs
Apr 21 10:38:08.906209 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:38:08.906215 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:38:08.906220 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:38:08.906227 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:38:08.906233 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:38:08.906238 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:38:08.906244 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:38:08.906249 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:38:08.906254 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:38:08.906260 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:38:08.906266 kernel: RETBleed: Vulnerable
Apr 21 10:38:08.906271 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:38:08.906278 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:38:08.906283 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:38:08.906289 kernel: active return thunk: its_return_thunk
Apr 21 10:38:08.906294 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:38:08.906299 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:38:08.906305 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:38:08.906310 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:38:08.906316 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:38:08.906321 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:38:08.906328 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:38:08.906334 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:38:08.906339 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:38:08.906344 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:38:08.906350 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:38:08.906355 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:38:08.906361 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:38:08.906366 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:38:08.906373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:38:08.906379 kernel: landlock: Up and running.
Apr 21 10:38:08.906432 kernel: SELinux: Initializing.
Apr 21 10:38:08.906438 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:38:08.906445 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:38:08.906450 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:38:08.906456 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:38:08.906462 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:38:08.906467 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:38:08.906474 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:38:08.906480 kernel: signal: max sigframe size: 3632
Apr 21 10:38:08.906486 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:38:08.906491 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:38:08.906497 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:38:08.906502 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:38:08.906531 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:38:08.906537 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:38:08.906542 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:38:08.906549 kernel: smpboot: Max logical packages: 1
Apr 21 10:38:08.906555 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:38:08.906560 kernel: devtmpfs: initialized
Apr 21 10:38:08.906566 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:38:08.906572 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 21 10:38:08.906577 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 21 10:38:08.906583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 21 10:38:08.906588 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 21 10:38:08.906594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 21 10:38:08.906601 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:38:08.906607 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:38:08.906622 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:38:08.906647 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:38:08.906658 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:38:08.906664 kernel: audit: type=2000 audit(1776767888.298:1): state=initialized audit_enabled=0 res=1
Apr 21 10:38:08.906669 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:38:08.906674 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:38:08.906680 kernel: cpuidle: using governor menu
Apr 21 10:38:08.906687 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:38:08.906693 kernel: dca service started, version 1.12.1
Apr 21 10:38:08.906698 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:38:08.906704 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:38:08.906709 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:38:08.906715 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:38:08.906720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:38:08.906726 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:38:08.906731 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:38:08.906738 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:38:08.906743 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:38:08.906749 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:38:08.906754 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:38:08.906759 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:38:08.906765 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:38:08.906770 kernel: ACPI: Interpreter enabled
Apr 21 10:38:08.906776 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:38:08.906781 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:38:08.906788 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:38:08.906793 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:38:08.906799 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:38:08.906804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:38:08.906917 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:38:08.906980 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:38:08.907035 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:38:08.907044 kernel: PCI host bridge to bus 0000:00
Apr 21 10:38:08.907101 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:38:08.907152 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:38:08.907201 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:38:08.907249 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:38:08.907297 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:38:08.907347 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 21 10:38:08.907397 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:38:08.907463 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:38:08.907561 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:38:08.907621 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 21 10:38:08.907709 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 21 10:38:08.907764 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:38:08.907819 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:38:08.907876 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:38:08.907936 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:38:08.907992 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 21 10:38:08.908047 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 21 10:38:08.908102 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 21 10:38:08.908165 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:38:08.908222 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 21 10:38:08.908277 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 21 10:38:08.908331 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 21 10:38:08.908390 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:38:08.908447 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 21 10:38:08.908502 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 21 10:38:08.908602 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 21 10:38:08.908692 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 21 10:38:08.908754 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:38:08.908809 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:38:08.908871 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:38:08.908925 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 21 10:38:08.908979 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 21 10:38:08.909038 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:38:08.909096 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 21 10:38:08.909103 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:38:08.909108 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:38:08.909114 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:38:08.909120 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:38:08.909125 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:38:08.909130 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:38:08.909136 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:38:08.909143 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:38:08.909149 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:38:08.909154 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:38:08.909160 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:38:08.909165 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:38:08.909170 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:38:08.909176 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:38:08.909181 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:38:08.909187 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:38:08.909194 kernel: iommu: Default domain type: Translated
Apr 21 10:38:08.909199 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:38:08.909205 kernel: efivars: Registered efivars operations
Apr 21 10:38:08.909210 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:38:08.909215 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:38:08.909221 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 21 10:38:08.909226 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 21 10:38:08.909232 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 21 10:38:08.909237 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 21 10:38:08.909293 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:38:08.909349 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:38:08.909403 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:38:08.909410 kernel: vgaarb: loaded
Apr 21 10:38:08.909416 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:38:08.909421 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:38:08.909427 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:38:08.909432 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:38:08.909438 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:38:08.909445 kernel: pnp: PnP ACPI init
Apr 21 10:38:08.909504 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:38:08.909594 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:38:08.909599 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:38:08.909605 kernel: NET: Registered PF_INET protocol family
Apr 21 10:38:08.909611 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:38:08.909616 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:38:08.909622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:38:08.909665 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:38:08.909671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:38:08.909677 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:38:08.909682 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:38:08.909687 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:38:08.909693 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:38:08.909698 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:38:08.909761 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 21 10:38:08.909816 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 21 10:38:08.909872 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:38:08.909921 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:38:08.909969 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:38:08.910016 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:38:08.910064 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:38:08.910111 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 21 10:38:08.910118 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:38:08.910124 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:38:08.910131 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:38:08.910137 kernel: Initialise system trusted keyrings
Apr 21 10:38:08.910142 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:38:08.910148 kernel: Key type asymmetric registered
Apr 21 10:38:08.910153 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:38:08.910158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:38:08.910164 kernel: io scheduler mq-deadline registered
Apr 21 10:38:08.910169 kernel: io scheduler kyber registered
Apr 21 10:38:08.910176 kernel: io scheduler bfq registered
Apr 21 10:38:08.910182 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:38:08.910188 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:38:08.910193 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:38:08.910199 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:38:08.910204 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:38:08.910210 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:38:08.910215 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:38:08.910221 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:38:08.910226 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:38:08.910288 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:38:08.910296 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:38:08.910344 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:38:08.910394 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:38:08 UTC (1776767888)
Apr 21 10:38:08.910445 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 21 10:38:08.910452 kernel: intel_pstate: CPU model not supported
Apr 21 10:38:08.910457 kernel: efifb: probing for efifb
Apr 21 10:38:08.910464 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 21 10:38:08.910470 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 21 10:38:08.910475 kernel: efifb: scrolling: redraw
Apr 21 10:38:08.910481 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 21 10:38:08.910486 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:38:08.910492 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:38:08.910547 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:38:08.910555 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:38:08.910560 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:38:08.910567 kernel: Segment Routing with IPv6
Apr 21 10:38:08.910573 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:38:08.910578 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:38:08.910584 kernel: Key type dns_resolver registered
Apr 21 10:38:08.910589 kernel: IPI shorthand broadcast: enabled
Apr 21 10:38:08.910595 kernel: sched_clock: Marking stable (779011191, 179228889)->(996903250, -38663170)
Apr 21 10:38:08.910601 kernel: registered taskstats version 1
Apr 21 10:38:08.910606 kernel: Loading compiled-in X.509 certificates
Apr 21 10:38:08.910612 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:38:08.910617 kernel: Key type .fscrypt registered
Apr 21 10:38:08.910624 kernel: Key type fscrypt-provisioning registered
Apr 21 10:38:08.910651 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:38:08.910660 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:38:08.910677 kernel: ima: No architecture policies found
Apr 21 10:38:08.910683 kernel: clk: Disabling unused clocks
Apr 21 10:38:08.910688 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:38:08.910694 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:38:08.910700 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:38:08.910706 kernel: Run /init as init process
Apr 21 10:38:08.910713 kernel: with arguments:
Apr 21 10:38:08.910719 kernel: /init
Apr 21 10:38:08.910724 kernel: with environment:
Apr 21 10:38:08.910730 kernel: HOME=/
Apr 21 10:38:08.910735 kernel: TERM=linux
Apr 21 10:38:08.910743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:38:08.910780 systemd[1]: Detected virtualization kvm.
Apr 21 10:38:08.910788 systemd[1]: Detected architecture x86-64.
Apr 21 10:38:08.910794 systemd[1]: Running in initrd.
Apr 21 10:38:08.910800 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:38:08.910806 systemd[1]: Hostname set to .
Apr 21 10:38:08.910812 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:38:08.910821 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:38:08.910827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:38:08.910834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:38:08.910840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:38:08.910846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:38:08.910852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:38:08.910858 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:38:08.910867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:38:08.910874 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:38:08.910880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:38:08.910886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:38:08.910892 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:38:08.910898 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:38:08.910904 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:38:08.910910 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:38:08.910918 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:38:08.910924 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:38:08.910930 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:38:08.910936 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:38:08.910942 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:38:08.910948 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:38:08.910954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:38:08.910960 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:38:08.910966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:38:08.910973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:38:08.910980 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:38:08.910986 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:38:08.910992 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:38:08.910998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:38:08.911004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:38:08.911010 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:38:08.911029 systemd-journald[193]: Collecting audit messages is disabled. Apr 21 10:38:08.911046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:38:08.911052 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:38:08.911061 systemd-journald[193]: Journal started Apr 21 10:38:08.911078 systemd-journald[193]: Runtime Journal (/run/log/journal/2e0a3fe450bd48d29d4f64a47345cc10) is 6.0M, max 48.3M, 42.2M free. Apr 21 10:38:08.913801 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:38:08.917799 systemd-modules-load[194]: Inserted module 'overlay' Apr 21 10:38:08.922734 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:38:08.928747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:38:08.933163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:38:08.937982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:38:08.943267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 21 10:38:08.949677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:38:08.954002 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:38:08.954755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:38:08.958434 kernel: Bridge firewalling registered Apr 21 10:38:08.957877 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 21 10:38:08.959162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:38:08.960121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:38:08.963099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:38:08.977013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:38:08.978935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:38:08.981240 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:38:08.985288 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:38:08.999206 dracut-cmdline[231]: dracut-dracut-053 Apr 21 10:38:09.002206 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:38:09.004502 systemd-resolved[229]: Positive Trust Anchors: Apr 21 10:38:09.004547 systemd-resolved[229]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:38:09.004572 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:38:09.006439 systemd-resolved[229]: Defaulting to hostname 'linux'. Apr 21 10:38:09.007168 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:38:09.010041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:38:09.087580 kernel: SCSI subsystem initialized Apr 21 10:38:09.095570 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:38:09.105547 kernel: iscsi: registered transport (tcp) Apr 21 10:38:09.123560 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:38:09.123592 kernel: QLogic iSCSI HBA Driver Apr 21 10:38:09.153955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:38:09.173702 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:38:09.195641 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:38:09.195690 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:38:09.197307 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:38:09.233570 kernel: raid6: avx512x4 gen() 46933 MB/s Apr 21 10:38:09.250589 kernel: raid6: avx512x2 gen() 46168 MB/s Apr 21 10:38:09.267595 kernel: raid6: avx512x1 gen() 46035 MB/s Apr 21 10:38:09.284570 kernel: raid6: avx2x4 gen() 37583 MB/s Apr 21 10:38:09.301562 kernel: raid6: avx2x2 gen() 37211 MB/s Apr 21 10:38:09.319416 kernel: raid6: avx2x1 gen() 28943 MB/s Apr 21 10:38:09.319443 kernel: raid6: using algorithm avx512x4 gen() 46933 MB/s Apr 21 10:38:09.337432 kernel: raid6: .... xor() 10428 MB/s, rmw enabled Apr 21 10:38:09.337487 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:38:09.355565 kernel: xor: automatically using best checksumming function avx Apr 21 10:38:09.477606 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:38:09.485794 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:38:09.503711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:38:09.513059 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 21 10:38:09.515621 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:38:09.520153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:38:09.533757 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Apr 21 10:38:09.556211 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:38:09.563713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:38:09.592034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:38:09.604889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:38:09.612059 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:38:09.614210 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:38:09.616529 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:38:09.620191 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:38:09.631571 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:38:09.631600 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:38:09.634661 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:38:09.640109 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:38:09.643052 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:38:09.653434 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:38:09.653467 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:38:09.653494 kernel: GPT:9289727 != 19775487 Apr 21 10:38:09.653565 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:38:09.653574 kernel: GPT:9289727 != 19775487 Apr 21 10:38:09.653591 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:38:09.653599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:38:09.653607 kernel: AES CTR mode by8 optimization enabled Apr 21 10:38:09.643178 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:38:09.652480 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:38:09.655215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:38:09.655338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:38:09.655496 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 10:38:09.668743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:38:09.675106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:38:09.677024 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:38:09.681891 kernel: libata version 3.00 loaded. Apr 21 10:38:09.692106 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:38:09.692261 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:38:09.692711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:38:09.707398 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:38:09.707575 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:38:09.707678 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Apr 21 10:38:09.707687 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (476) Apr 21 10:38:09.698132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:38:09.711569 kernel: scsi host0: ahci Apr 21 10:38:09.711754 kernel: scsi host1: ahci Apr 21 10:38:09.714557 kernel: scsi host2: ahci Apr 21 10:38:09.716570 kernel: scsi host3: ahci Apr 21 10:38:09.717080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:38:09.720180 kernel: scsi host4: ahci Apr 21 10:38:09.720288 kernel: scsi host5: ahci Apr 21 10:38:09.724719 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 21 10:38:09.724740 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 21 10:38:09.724750 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 21 10:38:09.725059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 21 10:38:09.730949 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 21 10:38:09.730961 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 21 10:38:09.730974 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 21 10:38:09.736275 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:38:09.736946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:38:09.745299 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 10:38:09.754660 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 10:38:09.771760 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:38:09.776397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:38:09.780691 disk-uuid[568]: Primary Header is updated. Apr 21 10:38:09.780691 disk-uuid[568]: Secondary Entries is updated. Apr 21 10:38:09.780691 disk-uuid[568]: Secondary Header is updated. Apr 21 10:38:09.785451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:38:09.787541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:38:09.791538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:38:09.800033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 10:38:10.041554 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:38:10.041655 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:38:10.044523 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:38:10.044537 kernel: ata3.00: applying bridge limits Apr 21 10:38:10.044546 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:38:10.047563 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:38:10.047601 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:38:10.048539 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:38:10.049572 kernel: ata3.00: configured for UDMA/100 Apr 21 10:38:10.052573 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:38:10.095768 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:38:10.095971 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:38:10.110563 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:38:10.792688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:38:10.792736 disk-uuid[569]: The operation has completed successfully. Apr 21 10:38:10.811283 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:38:10.811382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:38:10.827739 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:38:10.831589 sh[597]: Success Apr 21 10:38:10.841543 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:38:10.866912 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:38:10.876742 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:38:10.878415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:38:10.896558 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:38:10.896589 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:38:10.896598 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:38:10.898218 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:38:10.899440 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:38:10.904585 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:38:10.905583 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:38:10.918682 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:38:10.920856 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:38:10.929801 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:38:10.929827 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:38:10.929834 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:38:10.933551 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:38:10.940613 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:38:10.943687 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:38:10.949982 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:38:10.954689 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:38:10.995044 ignition[689]: Ignition 2.19.0 Apr 21 10:38:10.995060 ignition[689]: Stage: fetch-offline Apr 21 10:38:10.995086 ignition[689]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:38:10.995092 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:38:10.995153 ignition[689]: parsed url from cmdline: "" Apr 21 10:38:10.995155 ignition[689]: no config URL provided Apr 21 10:38:10.995159 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:38:10.995164 ignition[689]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:38:10.995184 ignition[689]: op(1): [started] loading QEMU firmware config module Apr 21 10:38:10.995188 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:38:11.001084 ignition[689]: op(1): [finished] loading QEMU firmware config module Apr 21 10:38:11.028377 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:38:11.040688 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:38:11.058851 systemd-networkd[785]: lo: Link UP Apr 21 10:38:11.058872 systemd-networkd[785]: lo: Gained carrier Apr 21 10:38:11.059715 systemd-networkd[785]: Enumeration completed Apr 21 10:38:11.060230 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:38:11.060232 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:38:11.060888 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:38:11.061914 systemd-networkd[785]: eth0: Link UP Apr 21 10:38:11.061916 systemd-networkd[785]: eth0: Gained carrier Apr 21 10:38:11.061921 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:38:11.062940 systemd[1]: Reached target network.target - Network. Apr 21 10:38:11.096586 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:38:11.151060 ignition[689]: parsing config with SHA512: 108102de10f682ab74156aea3d806e11a6b82e6ba41084ccdad23b0aa2bf6f9ca16e23f3e05e78749935ece23e87d8c8f3c833dcc01189941028280ef4be16db Apr 21 10:38:11.154868 unknown[689]: fetched base config from "system" Apr 21 10:38:11.155152 ignition[689]: fetch-offline: fetch-offline passed Apr 21 10:38:11.154877 unknown[689]: fetched user config from "qemu" Apr 21 10:38:11.155197 ignition[689]: Ignition finished successfully Apr 21 10:38:11.160382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:38:11.164578 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:38:11.176704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:38:11.191415 ignition[789]: Ignition 2.19.0 Apr 21 10:38:11.191433 ignition[789]: Stage: kargs Apr 21 10:38:11.191598 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:38:11.191604 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:38:11.192195 ignition[789]: kargs: kargs passed Apr 21 10:38:11.192224 ignition[789]: Ignition finished successfully Apr 21 10:38:11.198916 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:38:11.213684 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 21 10:38:11.224314 ignition[798]: Ignition 2.19.0 Apr 21 10:38:11.224332 ignition[798]: Stage: disks Apr 21 10:38:11.224450 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:38:11.224457 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:38:11.225112 ignition[798]: disks: disks passed Apr 21 10:38:11.225143 ignition[798]: Ignition finished successfully Apr 21 10:38:11.229565 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:38:11.232106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:38:11.234607 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:38:11.237896 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:38:11.241048 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:38:11.247182 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:38:11.258678 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:38:11.271918 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:38:11.275161 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:38:11.281699 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:38:11.355549 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:38:11.355920 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:38:11.357190 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:38:11.370603 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:38:11.374001 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 21 10:38:11.378196 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Apr 21 10:38:11.375192 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:38:11.375222 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:38:11.388121 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:38:11.388134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:38:11.388142 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:38:11.375239 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:38:11.393293 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:38:11.394572 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:38:11.398316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:38:11.409680 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:38:11.438298 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:38:11.443036 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:38:11.447659 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:38:11.452193 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:38:11.516391 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:38:11.524919 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:38:11.528679 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 21 10:38:11.534553 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:38:11.545867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:38:11.556468 ignition[932]: INFO : Ignition 2.19.0 Apr 21 10:38:11.556468 ignition[932]: INFO : Stage: mount Apr 21 10:38:11.558885 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:38:11.558885 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:38:11.558885 ignition[932]: INFO : mount: mount passed Apr 21 10:38:11.558885 ignition[932]: INFO : Ignition finished successfully Apr 21 10:38:11.563711 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:38:11.577735 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:38:11.894257 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:38:11.910786 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:38:11.918817 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Apr 21 10:38:11.918841 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:38:11.918850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:38:11.921247 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:38:11.924541 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:38:11.925589 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:38:11.944564 ignition[961]: INFO : Ignition 2.19.0
Apr 21 10:38:11.944564 ignition[961]: INFO : Stage: files
Apr 21 10:38:11.944564 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:38:11.944564 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:38:11.950976 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:38:11.950976 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:38:11.950976 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:38:11.950976 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:38:11.950976 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:38:11.961608 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:38:11.961608 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:38:11.961608 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:38:11.951084 unknown[961]: wrote ssh authorized keys file for user: core
Apr 21 10:38:12.057732 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:38:12.120730 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 21 10:38:12.335844 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:38:12.335844 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:38:12.341703 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 10:38:12.615047 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:38:12.802147 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:38:12.802147 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:38:12.807918 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:38:12.831057 ignition[961]: INFO : files: files passed
Apr 21 10:38:12.831057 ignition[961]: INFO : Ignition finished successfully
Apr 21 10:38:12.825303 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:38:12.838745 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:38:12.843074 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:38:12.846652 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:38:12.863031 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:38:12.846749 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:38:12.868403 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:38:12.868403 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:38:12.862638 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:38:12.877840 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:38:12.868085 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:38:12.876700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:38:12.898783 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:38:12.898877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:38:12.899654 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:38:12.902951 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:38:12.906416 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:38:12.907025 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:38:12.922570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:38:12.937800 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:38:12.948811 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:38:12.950013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:38:12.953379 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:38:12.957003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:38:12.957116 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:38:12.962486 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:38:12.963336 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:38:12.968078 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:38:12.970482 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:38:12.974130 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:38:12.977335 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:38:12.980958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:38:12.984076 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:38:12.987886 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:38:12.990975 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:38:12.993944 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:38:12.994067 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:38:12.999004 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:38:13.001840 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:38:13.005220 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:38:13.008561 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:38:13.009239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:38:13.009334 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:38:13.015717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:38:13.015814 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:38:13.019060 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:38:13.021887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:38:13.027653 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:38:13.028397 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:38:13.032604 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:38:13.035258 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:38:13.035345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:38:13.037997 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:38:13.038065 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:38:13.040998 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:38:13.041092 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:38:13.043919 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:38:13.043998 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:38:13.057758 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:38:13.058458 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:38:13.058602 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:38:13.062739 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:38:13.067139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:38:13.073912 ignition[1015]: INFO : Ignition 2.19.0
Apr 21 10:38:13.073912 ignition[1015]: INFO : Stage: umount
Apr 21 10:38:13.067250 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:38:13.079995 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:38:13.079995 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:38:13.079995 ignition[1015]: INFO : umount: umount passed
Apr 21 10:38:13.079995 ignition[1015]: INFO : Ignition finished successfully
Apr 21 10:38:13.067457 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:38:13.067553 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:38:13.074986 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:38:13.075062 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:38:13.079845 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:38:13.082323 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:38:13.082467 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:38:13.083445 systemd[1]: Stopped target network.target - Network.
Apr 21 10:38:13.083840 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:38:13.083871 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:38:13.086588 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:38:13.086630 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:38:13.090074 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:38:13.090112 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:38:13.094993 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:38:13.095032 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:38:13.098256 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:38:13.101265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:38:13.115566 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:38:13.115713 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:38:13.117755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:38:13.117799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:38:13.120571 systemd-networkd[785]: eth0: DHCPv6 lease lost
Apr 21 10:38:13.123502 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:38:13.123616 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:38:13.126747 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:38:13.126784 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:38:13.136625 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:38:13.139021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:38:13.139061 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:38:13.142555 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:38:13.142585 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:38:13.146025 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:38:13.146053 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:38:13.148928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:38:13.152197 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:38:13.152280 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:38:13.158198 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:38:13.158244 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:38:13.176439 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:38:13.176568 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:38:13.184144 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:38:13.184273 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:38:13.187724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:38:13.187754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:38:13.191026 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:38:13.191048 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:38:13.194171 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:38:13.194204 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:38:13.198810 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:38:13.198842 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:38:13.203320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:38:13.203354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:38:13.221753 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:38:13.223684 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:38:13.223730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:38:13.225761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:38:13.225791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:38:13.229488 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:38:13.229605 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:38:13.232989 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:38:13.236804 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:38:13.246410 systemd[1]: Switching root.
Apr 21 10:38:13.269027 systemd-journald[193]: Journal stopped
Apr 21 10:38:13.951435 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:38:13.951485 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:38:13.951497 kernel: SELinux: policy capability open_perms=1
Apr 21 10:38:13.951556 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:38:13.951565 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:38:13.951574 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:38:13.951583 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:38:13.951592 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:38:13.951614 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:38:13.951623 kernel: audit: type=1403 audit(1776767893.373:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:38:13.951632 systemd[1]: Successfully loaded SELinux policy in 33.151ms.
Apr 21 10:38:13.951649 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.663ms.
Apr 21 10:38:13.951658 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:38:13.951666 systemd[1]: Detected virtualization kvm.
Apr 21 10:38:13.951676 systemd[1]: Detected architecture x86-64.
Apr 21 10:38:13.951684 systemd[1]: Detected first boot.
Apr 21 10:38:13.951719 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:38:13.951732 zram_generator::config[1059]: No configuration found.
Apr 21 10:38:13.951745 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:38:13.951752 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:38:13.951760 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:38:13.951769 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:38:13.951779 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:38:13.951787 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:38:13.951795 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:38:13.951803 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:38:13.951811 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:38:13.951820 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:38:13.951828 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:38:13.951836 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:38:13.951845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:38:13.951853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:38:13.951861 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:38:13.951868 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:38:13.951876 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:38:13.951884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:38:13.951892 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:38:13.951899 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:38:13.951907 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:38:13.951915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:38:13.951924 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:38:13.951933 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:38:13.951940 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:38:13.951948 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:38:13.951956 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:38:13.951963 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:38:13.951971 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:38:13.951981 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:38:13.951989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:38:13.951997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:38:13.952004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:38:13.952011 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:38:13.952019 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:38:13.952026 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:38:13.952034 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:38:13.952042 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:38:13.952051 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:38:13.952058 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:38:13.952066 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:38:13.952074 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:38:13.952082 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:38:13.952089 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:38:13.952097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:38:13.952104 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:38:13.952112 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:38:13.952121 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:38:13.952129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:38:13.952136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:38:13.952143 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:38:13.952152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:38:13.952160 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:38:13.952167 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:38:13.952174 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:38:13.952183 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:38:13.952191 kernel: fuse: init (API version 7.39)
Apr 21 10:38:13.952198 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:38:13.952206 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:38:13.952214 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:38:13.952221 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:38:13.952228 kernel: ACPI: bus type drm_connector registered
Apr 21 10:38:13.952236 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:38:13.952256 systemd-journald[1143]: Collecting audit messages is disabled.
Apr 21 10:38:13.952274 kernel: loop: module loaded
Apr 21 10:38:13.952282 systemd-journald[1143]: Journal started
Apr 21 10:38:13.952298 systemd-journald[1143]: Runtime Journal (/run/log/journal/2e0a3fe450bd48d29d4f64a47345cc10) is 6.0M, max 48.3M, 42.2M free.
Apr 21 10:38:13.689985 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:38:13.707787 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 10:38:13.708138 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:38:13.955932 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:38:13.958555 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:38:13.958594 systemd[1]: Stopped verity-setup.service.
Apr 21 10:38:13.964556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:38:13.966573 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:38:13.968135 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:38:13.969842 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:38:13.971651 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:38:13.973271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:38:13.975044 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:38:13.976851 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:38:13.978560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:38:13.980675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:38:13.982894 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:38:13.983020 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:38:13.985034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:38:13.985157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:38:13.987089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:38:13.987319 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:38:13.989156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:38:13.989281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:38:13.991367 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:38:13.991488 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:38:13.993557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:38:13.993669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:38:13.995667 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:38:13.997714 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:38:14.000041 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:38:14.002205 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:38:14.011352 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:38:14.023633 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:38:14.026475 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:38:14.028259 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:38:14.028296 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:38:14.030560 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:38:14.033241 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:38:14.035768 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:38:14.037423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:38:14.038678 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:38:14.041234 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:38:14.043265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:38:14.044315 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:38:14.046210 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:38:14.046932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:38:14.047848 systemd-journald[1143]: Time spent on flushing to /var/log/journal/2e0a3fe450bd48d29d4f64a47345cc10 is 11.919ms for 994 entries.
Apr 21 10:38:14.047848 systemd-journald[1143]: System Journal (/var/log/journal/2e0a3fe450bd48d29d4f64a47345cc10) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:38:14.068689 systemd-journald[1143]: Received client request to flush runtime journal.
Apr 21 10:38:14.056117 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:38:14.058923 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:38:14.065395 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:38:14.068500 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:38:14.070825 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:38:14.073111 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:38:14.075309 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:38:14.077971 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:38:14.079810 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:38:14.082243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:38:14.091307 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:38:14.094833 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:38:14.094897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:38:14.098557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:38:14.103726 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:38:14.106447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:38:14.116607 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:38:14.117093 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:38:14.129571 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:38:14.131801 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Apr 21 10:38:14.131823 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Apr 21 10:38:14.136975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:38:14.165548 kernel: loop2: detected capacity change from 0 to 217752
Apr 21 10:38:14.196572 kernel: loop3: detected capacity change from 0 to 140768
Apr 21 10:38:14.211557 kernel: loop4: detected capacity change from 0 to 142488
Apr 21 10:38:14.221550 kernel: loop5: detected capacity change from 0 to 217752
Apr 21 10:38:14.227727 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 21 10:38:14.228040 (sd-merge)[1197]: Merged extensions into '/usr'.
Apr 21 10:38:14.230825 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:38:14.230849 systemd[1]: Reloading...
Apr 21 10:38:14.272662 zram_generator::config[1223]: No configuration found.
Apr 21 10:38:14.303359 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:38:14.357009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:38:14.386309 systemd[1]: Reloading finished in 155 ms.
Apr 21 10:38:14.421962 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:38:14.424248 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:38:14.426357 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:38:14.438820 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:38:14.441260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:38:14.444121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:38:14.447692 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:38:14.447741 systemd[1]: Reloading...
Apr 21 10:38:14.455349 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:38:14.455854 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:38:14.456332 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:38:14.456484 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Apr 21 10:38:14.456668 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Apr 21 10:38:14.458305 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:38:14.458362 systemd-tmpfiles[1263]: Skipping /boot
Apr 21 10:38:14.466208 systemd-udevd[1264]: Using default interface naming scheme 'v255'.
Apr 21 10:38:14.467013 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:38:14.467017 systemd-tmpfiles[1263]: Skipping /boot
Apr 21 10:38:14.477554 zram_generator::config[1289]: No configuration found.
Apr 21 10:38:14.524568 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1322)
Apr 21 10:38:14.552548 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:38:14.556659 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:38:14.560282 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 21 10:38:14.560469 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:38:14.560621 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:38:14.561931 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:38:14.566824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:38:14.578578 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:38:14.616596 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:38:14.617336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:38:14.619675 systemd[1]: Reloading finished in 171 ms.
Apr 21 10:38:14.652446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:38:14.665560 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:38:14.669994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:38:14.702121 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:38:14.726451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:38:14.736680 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:38:14.739766 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:38:14.741754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:38:14.742644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:38:14.747664 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:38:14.750213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:38:14.753197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:38:14.755253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:38:14.756422 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:38:14.759209 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:38:14.765100 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:38:14.770466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:38:14.773304 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:38:14.780659 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:38:14.785099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:38:14.785770 augenrules[1388]: No rules
Apr 21 10:38:14.786857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:38:14.787457 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:38:14.789943 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:38:14.792153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:38:14.792324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:38:14.803856 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:38:14.806485 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:38:14.806647 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:38:14.808772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:38:14.808879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:38:14.811184 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:38:14.811369 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:38:14.813359 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:38:14.815648 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:38:14.827146 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:38:14.835805 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:38:14.836890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:38:14.836971 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:38:14.838164 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:38:14.842997 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:38:14.846756 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:38:14.847173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:38:14.850553 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:38:14.858837 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:38:14.871077 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:38:14.873886 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:38:14.888759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:38:14.891681 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:38:14.898457 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:38:14.919885 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:38:14.922955 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:38:14.925313 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:38:14.927492 systemd-networkd[1378]: lo: Link UP
Apr 21 10:38:14.927540 systemd-networkd[1378]: lo: Gained carrier
Apr 21 10:38:14.928332 systemd-networkd[1378]: Enumeration completed
Apr 21 10:38:14.928431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:38:14.929357 systemd-resolved[1381]: Positive Trust Anchors:
Apr 21 10:38:14.929365 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:38:14.929390 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:38:14.930636 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:38:14.930638 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:38:14.931254 systemd-networkd[1378]: eth0: Link UP
Apr 21 10:38:14.931269 systemd-networkd[1378]: eth0: Gained carrier
Apr 21 10:38:14.931278 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:38:14.932200 systemd-resolved[1381]: Defaulting to hostname 'linux'.
Apr 21 10:38:14.940670 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:38:14.942993 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:38:14.944964 systemd[1]: Reached target network.target - Network.
Apr 21 10:38:14.945599 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 10:38:14.946451 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:38:14.946677 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 21 10:38:15.577838 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 21 10:38:15.577864 systemd-timesyncd[1382]: Initial clock synchronization to Tue 2026-04-21 10:38:15.577744 UTC.
Apr 21 10:38:15.577889 systemd-resolved[1381]: Clock change detected. Flushing caches.
Apr 21 10:38:15.578652 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:38:15.580361 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:38:15.582298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:38:15.584361 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:38:15.586097 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:38:15.588082 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:38:15.590021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:38:15.590057 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:38:15.591443 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:38:15.593478 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:38:15.596296 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:38:15.605501 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:38:15.608008 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:38:15.610065 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:38:15.611643 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:38:15.613129 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:38:15.613159 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:38:15.613994 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:38:15.616416 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:38:15.618478 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:38:15.621706 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:38:15.623397 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:38:15.625568 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:38:15.628141 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:38:15.630698 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:38:15.633020 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:38:15.634545 jq[1430]: false
Apr 21 10:38:15.637943 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:38:15.640164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:38:15.640425 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:38:15.641944 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:38:15.643306 extend-filesystems[1431]: Found loop3
Apr 21 10:38:15.644918 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:38:15.647323 extend-filesystems[1431]: Found loop4
Apr 21 10:38:15.648428 extend-filesystems[1431]: Found loop5
Apr 21 10:38:15.648428 extend-filesystems[1431]: Found sr0
Apr 21 10:38:15.648428 extend-filesystems[1431]: Found vda
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda1
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda2
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda3
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found usr
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda4
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda6
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda7
Apr 21 10:38:15.655142 extend-filesystems[1431]: Found vda9
Apr 21 10:38:15.655142 extend-filesystems[1431]: Checking size of /dev/vda9
Apr 21 10:38:15.662578 update_engine[1443]: I20260421 10:38:15.653064 1443 main.cc:92] Flatcar Update Engine starting
Apr 21 10:38:15.652862 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:38:15.653026 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:38:15.663098 jq[1444]: true
Apr 21 10:38:15.653820 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:38:15.663584 jq[1450]: true
Apr 21 10:38:15.653948 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:38:15.667065 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:38:15.667188 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:38:15.671904 extend-filesystems[1431]: Resized partition /dev/vda9
Apr 21 10:38:15.673347 tar[1448]: linux-amd64/LICENSE
Apr 21 10:38:15.673347 tar[1448]: linux-amd64/helm
Apr 21 10:38:15.674223 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:38:15.678811 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 21 10:38:15.679722 dbus-daemon[1429]: [system] SELinux support is enabled
Apr 21 10:38:15.681995 update_engine[1443]: I20260421 10:38:15.681633 1443 update_check_scheduler.cc:74] Next update check in 8m32s
Apr 21 10:38:15.681937 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:38:15.682349 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:38:15.686531 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:38:15.686552 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:38:15.689058 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:38:15.689075 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:38:15.692272 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:38:15.703895 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:38:15.709000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1308)
Apr 21 10:38:15.713863 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 21 10:38:15.732192 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:38:15.732935 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:38:15.733511 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 21 10:38:15.733511 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 21 10:38:15.733511 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 21 10:38:15.741558 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Apr 21 10:38:15.733581 systemd-logind[1440]: New seat seat0.
Apr 21 10:38:15.751981 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:38:15.737195 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:38:15.737331 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:38:15.740610 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:38:15.744574 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:38:15.748290 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 10:38:15.750722 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:38:15.847413 containerd[1462]: time="2026-04-21T10:38:15.847167944Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:38:15.864405 containerd[1462]: time="2026-04-21T10:38:15.864356836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.865897 containerd[1462]: time="2026-04-21T10:38:15.865871346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.865948876Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866045217Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866147345Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866157980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866192540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866201358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866334626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866344970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866353259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866376875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866425084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.866787 containerd[1462]: time="2026-04-21T10:38:15.866554379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:38:15.867020 containerd[1462]: time="2026-04-21T10:38:15.866653006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:38:15.867020 containerd[1462]: time="2026-04-21T10:38:15.866662045Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:38:15.867020 containerd[1462]: time="2026-04-21T10:38:15.866707825Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:38:15.867020 containerd[1462]: time="2026-04-21T10:38:15.866736168Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:38:15.872272 containerd[1462]: time="2026-04-21T10:38:15.872221676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:38:15.872314 containerd[1462]: time="2026-04-21T10:38:15.872282378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:38:15.872314 containerd[1462]: time="2026-04-21T10:38:15.872300867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:38:15.872377 containerd[1462]: time="2026-04-21T10:38:15.872313237Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:38:15.872377 containerd[1462]: time="2026-04-21T10:38:15.872371391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:38:15.872504 containerd[1462]: time="2026-04-21T10:38:15.872460346Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:38:15.872826 containerd[1462]: time="2026-04-21T10:38:15.872781048Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:38:15.872911 containerd[1462]: time="2026-04-21T10:38:15.872868747Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:38:15.872911 containerd[1462]: time="2026-04-21T10:38:15.872895932Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:38:15.872911 containerd[1462]: time="2026-04-21T10:38:15.872904988Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:38:15.872979 containerd[1462]: time="2026-04-21T10:38:15.872913759Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.872979 containerd[1462]: time="2026-04-21T10:38:15.872923290Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.872979 containerd[1462]: time="2026-04-21T10:38:15.872931538Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.872979 containerd[1462]: time="2026-04-21T10:38:15.872944380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.873042 containerd[1462]: time="2026-04-21T10:38:15.872985511Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.873042 containerd[1462]: time="2026-04-21T10:38:15.873006819Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.873042 containerd[1462]: time="2026-04-21T10:38:15.873016137Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.873042 containerd[1462]: time="2026-04-21T10:38:15.873025140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:38:15.873042 containerd[1462]: time="2026-04-21T10:38:15.873039484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873052984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873062028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873070626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873079404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873088889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873097083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873112 containerd[1462]: time="2026-04-21T10:38:15.873106129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873115736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873126029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873134634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873142755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873152880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873169954Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873193769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873201216Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:38:15.873265 containerd[1462]: time="2026-04-21T10:38:15.873255493Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873268533Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873276169Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873352286Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873360918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873373722Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873380670Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:38:15.873388 containerd[1462]: time="2026-04-21T10:38:15.873387469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:38:15.873653 containerd[1462]: time="2026-04-21T10:38:15.873584674Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:38:15.873653 containerd[1462]: time="2026-04-21T10:38:15.873641166Z" level=info msg="Connect containerd service"
Apr 21 10:38:15.873822 containerd[1462]: time="2026-04-21T10:38:15.873665111Z" level=info msg="using legacy CRI server"
Apr 21 10:38:15.873822 containerd[1462]: time="2026-04-21T10:38:15.873670479Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:38:15.873822 containerd[1462]: time="2026-04-21T10:38:15.873799646Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:38:15.874384 containerd[1462]: time="2026-04-21T10:38:15.874335267Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:38:15.874966 containerd[1462]: time="2026-04-21T10:38:15.874887264Z" level=info msg="Start subscribing containerd event"
Apr 21 10:38:15.875918 containerd[1462]: time="2026-04-21T10:38:15.875028216Z" level=info msg="Start recovering state"
Apr 21 10:38:15.875918 containerd[1462]: time="2026-04-21T10:38:15.875090769Z" level=info msg="Start event monitor"
Apr 21 10:38:15.875918 containerd[1462]: time="2026-04-21T10:38:15.875105283Z" level=info msg="Start snapshots syncer"
Apr 21 10:38:15.875918 containerd[1462]: time="2026-04-21T10:38:15.875111922Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:38:15.875918 containerd[1462]: time="2026-04-21T10:38:15.875120895Z" level=info msg="Start streaming server"
Apr 21 10:38:15.876205 containerd[1462]: time="2026-04-21T10:38:15.876150322Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:38:15.876270 containerd[1462]: time="2026-04-21T10:38:15.876261793Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:38:15.878461 containerd[1462]: time="2026-04-21T10:38:15.876323386Z" level=info msg="containerd successfully booted in 0.029957s"
Apr 21 10:38:15.876406 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:38:15.959215 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:38:15.977511 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:38:15.986067 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:38:15.992891 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:38:15.993071 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:38:15.996096 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:38:16.006698 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:38:16.009837 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:38:16.012249 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:38:16.014257 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:38:16.102980 tar[1448]: linux-amd64/README.md
Apr 21 10:38:16.120170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:38:16.783313 systemd-networkd[1378]: eth0: Gained IPv6LL
Apr 21 10:38:16.785614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:38:16.788086 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:38:16.800031 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 21 10:38:16.803064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:38:16.805536 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:38:16.818918 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 21 10:38:16.819110 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 21 10:38:16.821200 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:38:16.821667 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:38:17.443603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:38:17.445894 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:38:17.447101 systemd[1]: Startup finished in 904ms (kernel) + 4.662s (initrd) + 3.475s (userspace) = 9.042s.
Apr 21 10:38:17.448040 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:38:17.797356 kubelet[1541]: E0421 10:38:17.797188 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:38:17.799080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:38:17.799254 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:38:22.421233 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:38:22.422239 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:59122.service - OpenSSH per-connection server daemon (10.0.0.1:59122).
Apr 21 10:38:22.460524 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 59122 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:22.461849 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:22.469119 systemd-logind[1440]: New session 1 of user core.
Apr 21 10:38:22.470045 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:38:22.479014 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:38:22.487308 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:38:22.489067 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:38:22.494538 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:38:22.557281 systemd[1559]: Queued start job for default target default.target.
Apr 21 10:38:22.569607 systemd[1559]: Created slice app.slice - User Application Slice.
Apr 21 10:38:22.569653 systemd[1559]: Reached target paths.target - Paths.
Apr 21 10:38:22.569663 systemd[1559]: Reached target timers.target - Timers.
Apr 21 10:38:22.570905 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:38:22.580143 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:38:22.580201 systemd[1559]: Reached target sockets.target - Sockets.
Apr 21 10:38:22.580210 systemd[1559]: Reached target basic.target - Basic System.
Apr 21 10:38:22.580233 systemd[1559]: Reached target default.target - Main User Target.
Apr 21 10:38:22.580252 systemd[1559]: Startup finished in 81ms.
Apr 21 10:38:22.580573 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:38:22.581790 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:38:22.640195 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:59124.service - OpenSSH per-connection server daemon (10.0.0.1:59124).
Apr 21 10:38:22.667742 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 59124 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:22.668802 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:22.672194 systemd-logind[1440]: New session 2 of user core.
Apr 21 10:38:22.682921 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:38:22.734651 sshd[1570]: pam_unix(sshd:session): session closed for user core
Apr 21 10:38:22.742712 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:59124.service: Deactivated successfully.
Apr 21 10:38:22.743736 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:38:22.744745 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:38:22.745629 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:59134.service - OpenSSH per-connection server daemon (10.0.0.1:59134).
Apr 21 10:38:22.746154 systemd-logind[1440]: Removed session 2.
Apr 21 10:38:22.773406 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 59134 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:22.774359 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:22.777413 systemd-logind[1440]: New session 3 of user core.
Apr 21 10:38:22.785921 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:38:22.832368 sshd[1577]: pam_unix(sshd:session): session closed for user core
Apr 21 10:38:22.847656 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:59134.service: Deactivated successfully.
Apr 21 10:38:22.848682 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:38:22.849585 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:38:22.850487 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:59136.service - OpenSSH per-connection server daemon (10.0.0.1:59136).
Apr 21 10:38:22.851072 systemd-logind[1440]: Removed session 3.
Apr 21 10:38:22.878164 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 59136 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:22.879201 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:22.882262 systemd-logind[1440]: New session 4 of user core.
Apr 21 10:38:22.892912 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:38:22.942726 sshd[1584]: pam_unix(sshd:session): session closed for user core
Apr 21 10:38:22.951527 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:59136.service: Deactivated successfully.
Apr 21 10:38:22.952550 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:38:22.953484 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:38:22.954406 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:59146.service - OpenSSH per-connection server daemon (10.0.0.1:59146).
Apr 21 10:38:22.954944 systemd-logind[1440]: Removed session 4.
Apr 21 10:38:22.981192 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 59146 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:22.982239 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:22.985517 systemd-logind[1440]: New session 5 of user core.
Apr 21 10:38:23.002890 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:38:23.056514 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:38:23.056719 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:38:23.071928 sudo[1594]: pam_unix(sudo:session): session closed for user root
Apr 21 10:38:23.073480 sshd[1591]: pam_unix(sshd:session): session closed for user core
Apr 21 10:38:23.098946 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:59146.service: Deactivated successfully.
Apr 21 10:38:23.100064 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:38:23.101066 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:38:23.102027 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:59162.service - OpenSSH per-connection server daemon (10.0.0.1:59162).
Apr 21 10:38:23.102549 systemd-logind[1440]: Removed session 5.
Apr 21 10:38:23.129713 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 59162 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:23.130730 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:23.133929 systemd-logind[1440]: New session 6 of user core.
Apr 21 10:38:23.139921 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:38:23.189942 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:38:23.190168 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:38:23.193033 sudo[1603]: pam_unix(sudo:session): session closed for user root
Apr 21 10:38:23.196947 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:38:23.197165 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:38:23.214021 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:38:23.215518 auditctl[1606]: No rules
Apr 21 10:38:23.215849 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:38:23.216086 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:38:23.217749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:38:23.240661 augenrules[1624]: No rules
Apr 21 10:38:23.241606 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:38:23.242279 sudo[1602]: pam_unix(sudo:session): session closed for user root
Apr 21 10:38:23.243510 sshd[1599]: pam_unix(sshd:session): session closed for user core
Apr 21 10:38:23.251652 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:59162.service: Deactivated successfully.
Apr 21 10:38:23.252737 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:38:23.253658 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:38:23.254561 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:59170.service - OpenSSH per-connection server daemon (10.0.0.1:59170).
Apr 21 10:38:23.255140 systemd-logind[1440]: Removed session 6.
Apr 21 10:38:23.282009 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 59170 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:38:23.283170 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:38:23.286357 systemd-logind[1440]: New session 7 of user core.
Apr 21 10:38:23.301115 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:38:23.351534 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:38:23.351797 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:38:23.578814 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:38:23.578930 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:38:23.796316 dockerd[1655]: time="2026-04-21T10:38:23.795924123Z" level=info msg="Starting up"
Apr 21 10:38:23.906394 dockerd[1655]: time="2026-04-21T10:38:23.906268595Z" level=info msg="Loading containers: start."
Apr 21 10:38:24.001791 kernel: Initializing XFRM netlink socket
Apr 21 10:38:24.070348 systemd-networkd[1378]: docker0: Link UP
Apr 21 10:38:24.098215 dockerd[1655]: time="2026-04-21T10:38:24.098168087Z" level=info msg="Loading containers: done."
Apr 21 10:38:24.111365 dockerd[1655]: time="2026-04-21T10:38:24.111310354Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:38:24.111454 dockerd[1655]: time="2026-04-21T10:38:24.111407630Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:38:24.111499 dockerd[1655]: time="2026-04-21T10:38:24.111474488Z" level=info msg="Daemon has completed initialization"
Apr 21 10:38:24.137530 dockerd[1655]: time="2026-04-21T10:38:24.137466972Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:38:24.137652 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:38:24.499167 containerd[1462]: time="2026-04-21T10:38:24.499130654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 21 10:38:25.280643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431317709.mount: Deactivated successfully.
Apr 21 10:38:25.883900 containerd[1462]: time="2026-04-21T10:38:25.883829822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:25.884394 containerd[1462]: time="2026-04-21T10:38:25.884342457Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861"
Apr 21 10:38:25.885553 containerd[1462]: time="2026-04-21T10:38:25.885513751Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:25.888020 containerd[1462]: time="2026-04-21T10:38:25.887974702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:25.888922 containerd[1462]: time="2026-04-21T10:38:25.888895923Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.389722721s"
Apr 21 10:38:25.888922 containerd[1462]: time="2026-04-21T10:38:25.888918488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 21 10:38:25.889584 containerd[1462]: time="2026-04-21T10:38:25.889558085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 21 10:38:26.578812 containerd[1462]: time="2026-04-21T10:38:26.578669211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:26.579349 containerd[1462]: time="2026-04-21T10:38:26.579295322Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591"
Apr 21 10:38:26.580061 containerd[1462]: time="2026-04-21T10:38:26.579993073Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:26.582405 containerd[1462]: time="2026-04-21T10:38:26.582355102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:26.583369 containerd[1462]: time="2026-04-21T10:38:26.583300053Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 693.705766ms"
Apr 21 10:38:26.583482 containerd[1462]: time="2026-04-21T10:38:26.583404036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 21 10:38:26.584017 containerd[1462]: time="2026-04-21T10:38:26.583973392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 21 10:38:27.225594 containerd[1462]: time="2026-04-21T10:38:27.225529691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:27.226277 containerd[1462]: time="2026-04-21T10:38:27.226209852Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222"
Apr 21 10:38:27.226994 containerd[1462]: time="2026-04-21T10:38:27.226932493Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:27.229260 containerd[1462]: time="2026-04-21T10:38:27.229175751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:27.230570 containerd[1462]: time="2026-04-21T10:38:27.230536615Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 646.509639ms"
Apr 21 10:38:27.230634 containerd[1462]: time="2026-04-21T10:38:27.230579210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 21 10:38:27.231203 containerd[1462]: time="2026-04-21T10:38:27.231155222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 21 10:38:27.932103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247381820.mount: Deactivated successfully.
Apr 21 10:38:27.932875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:38:27.942960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:38:28.032712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:38:28.035904 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:38:28.071245 kubelet[1887]: E0421 10:38:28.071117 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:38:28.073674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:38:28.073843 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:38:28.179548 containerd[1462]: time="2026-04-21T10:38:28.179468646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:28.180214 containerd[1462]: time="2026-04-21T10:38:28.180168174Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819"
Apr 21 10:38:28.181072 containerd[1462]: time="2026-04-21T10:38:28.181029561Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:28.182626 containerd[1462]: time="2026-04-21T10:38:28.182522280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:28.182946 containerd[1462]: time="2026-04-21T10:38:28.182895897Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 951.707286ms"
Apr 21 10:38:28.182976 containerd[1462]: time="2026-04-21T10:38:28.182944697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 21 10:38:28.183459 containerd[1462]: time="2026-04-21T10:38:28.183436194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 21 10:38:28.545914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142070520.mount: Deactivated successfully.
Apr 21 10:38:29.084046 containerd[1462]: time="2026-04-21T10:38:29.083981935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:29.084700 containerd[1462]: time="2026-04-21T10:38:29.084656273Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980"
Apr 21 10:38:29.085662 containerd[1462]: time="2026-04-21T10:38:29.085592763Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:29.088367 containerd[1462]: time="2026-04-21T10:38:29.088308913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:29.089331 containerd[1462]: time="2026-04-21T10:38:29.089291594Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 905.825458ms"
Apr 21 10:38:29.089331 containerd[1462]: time="2026-04-21T10:38:29.089324932Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 21 10:38:29.089968 containerd[1462]: time="2026-04-21T10:38:29.089927821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:38:29.471511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520577884.mount: Deactivated successfully.
Apr 21 10:38:29.478188 containerd[1462]: time="2026-04-21T10:38:29.478105755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:29.478957 containerd[1462]: time="2026-04-21T10:38:29.478895690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 21 10:38:29.480135 containerd[1462]: time="2026-04-21T10:38:29.480099680Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:29.482959 containerd[1462]: time="2026-04-21T10:38:29.482914565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:29.483854 containerd[1462]: time="2026-04-21T10:38:29.483817740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 393.850206ms"
Apr 21 10:38:29.483892 containerd[1462]: time="2026-04-21T10:38:29.483867480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:38:29.484535 containerd[1462]: time="2026-04-21T10:38:29.484491481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 21 10:38:29.879177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196493170.mount: Deactivated successfully.
Apr 21 10:38:30.410125 containerd[1462]: time="2026-04-21T10:38:30.410062629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:30.410699 containerd[1462]: time="2026-04-21T10:38:30.410664955Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979"
Apr 21 10:38:30.411826 containerd[1462]: time="2026-04-21T10:38:30.411798700Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:30.416582 containerd[1462]: time="2026-04-21T10:38:30.416529496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:38:30.417373 containerd[1462]: time="2026-04-21T10:38:30.417342711Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 932.805546ms"
Apr 21 10:38:30.417411 containerd[1462]: time="2026-04-21T10:38:30.417381846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 21 10:38:31.589803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:38:31.601131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:38:31.620353 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit session-7.scope)...
Apr 21 10:38:31.620378 systemd[1]: Reloading...
Apr 21 10:38:31.667817 zram_generator::config[2087]: No configuration found.
Apr 21 10:38:31.744058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:38:31.789882 systemd[1]: Reloading finished in 169 ms.
Apr 21 10:38:31.834240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:38:31.836949 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 10:38:31.837167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:38:31.838535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:38:31.949266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:38:31.953621 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:38:31.993071 kubelet[2137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:38:32.168777 kubelet[2137]: I0421 10:38:32.168693 2137 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:38:32.168777 kubelet[2137]: I0421 10:38:32.168739 2137 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:38:32.168777 kubelet[2137]: I0421 10:38:32.168781 2137 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:38:32.168777 kubelet[2137]: I0421 10:38:32.168786 2137 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:38:32.169001 kubelet[2137]: I0421 10:38:32.168979 2137 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:38:32.190180 kubelet[2137]: E0421 10:38:32.190127 2137 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:38:32.190386 kubelet[2137]: I0421 10:38:32.190346 2137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:38:32.192650 kubelet[2137]: E0421 10:38:32.192589 2137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:38:32.192711 kubelet[2137]: I0421 10:38:32.192666 2137 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:38:32.195884 kubelet[2137]: I0421 10:38:32.195826 2137 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:38:32.197306 kubelet[2137]: I0421 10:38:32.197263 2137 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:38:32.197440 kubelet[2137]: I0421 10:38:32.197301 2137 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:38:32.197531 kubelet[2137]: I0421 10:38:32.197447 2137 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:38:32.197531 
kubelet[2137]: I0421 10:38:32.197453 2137 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:38:32.197566 kubelet[2137]: I0421 10:38:32.197534 2137 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:38:32.200099 kubelet[2137]: I0421 10:38:32.200008 2137 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:38:32.200203 kubelet[2137]: I0421 10:38:32.200185 2137 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:38:32.200224 kubelet[2137]: I0421 10:38:32.200210 2137 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:38:32.200241 kubelet[2137]: I0421 10:38:32.200229 2137 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:38:32.200241 kubelet[2137]: I0421 10:38:32.200237 2137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:38:32.204061 kubelet[2137]: I0421 10:38:32.204009 2137 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:38:32.207862 kubelet[2137]: I0421 10:38:32.207825 2137 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:38:32.207909 kubelet[2137]: I0421 10:38:32.207879 2137 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:38:32.207956 kubelet[2137]: W0421 10:38:32.207925 2137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:38:32.209657 kubelet[2137]: I0421 10:38:32.209625 2137 server.go:1257] "Started kubelet" Apr 21 10:38:32.210017 kubelet[2137]: I0421 10:38:32.209922 2137 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:38:32.210074 kubelet[2137]: I0421 10:38:32.210020 2137 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:38:32.210264 kubelet[2137]: I0421 10:38:32.210238 2137 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:38:32.211377 kubelet[2137]: I0421 10:38:32.210309 2137 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:38:32.211377 kubelet[2137]: I0421 10:38:32.211258 2137 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:38:32.212121 kubelet[2137]: I0421 10:38:32.211671 2137 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:38:32.213248 kubelet[2137]: E0421 10:38:32.212881 2137 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:38:32.213248 kubelet[2137]: I0421 10:38:32.213068 2137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:38:32.213311 kubelet[2137]: E0421 10:38:32.213305 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:32.213329 kubelet[2137]: I0421 10:38:32.213321 2137 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:38:32.213472 kubelet[2137]: I0421 10:38:32.213429 2137 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:38:32.213496 kubelet[2137]: I0421 10:38:32.213491 2137 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:38:32.214220 kubelet[2137]: E0421 10:38:32.214172 2137 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Apr 21 10:38:32.214344 kubelet[2137]: I0421 10:38:32.214322 2137 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:38:32.214533 kubelet[2137]: I0421 10:38:32.214400 2137 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:38:32.214767 kubelet[2137]: E0421 10:38:32.213611 2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a859009f90a749 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:38:32.209590089 +0000 UTC m=+0.252479745,LastTimestamp:2026-04-21 10:38:32.209590089 +0000 UTC m=+0.252479745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:38:32.215461 kubelet[2137]: I0421 10:38:32.215414 2137 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:38:32.225956 kubelet[2137]: I0421 10:38:32.225912 2137 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:38:32.225956 kubelet[2137]: I0421 10:38:32.225937 2137 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:38:32.225956 kubelet[2137]: I0421 10:38:32.225950 2137 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:38:32.229237 kubelet[2137]: I0421 10:38:32.229149 2137 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:38:32.230256 kubelet[2137]: I0421 10:38:32.230211 2137 policy_none.go:50] "Start" Apr 21 10:38:32.230256 kubelet[2137]: I0421 10:38:32.230230 2137 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:38:32.230256 kubelet[2137]: I0421 10:38:32.230241 2137 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:38:32.230256 kubelet[2137]: I0421 10:38:32.230245 2137 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:38:32.230256 kubelet[2137]: I0421 10:38:32.230251 2137 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:38:32.230256 kubelet[2137]: I0421 10:38:32.230261 2137 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:38:32.230384 kubelet[2137]: E0421 10:38:32.230295 2137 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:38:32.232005 kubelet[2137]: I0421 10:38:32.231971 2137 policy_none.go:44] "Start" Apr 21 10:38:32.235246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:38:32.250127 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 21 10:38:32.263310 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 21 10:38:32.264477 kubelet[2137]: E0421 10:38:32.264437 2137 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:38:32.264616 kubelet[2137]: I0421 10:38:32.264587 2137 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:38:32.264637 kubelet[2137]: I0421 10:38:32.264612 2137 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:38:32.264844 kubelet[2137]: I0421 10:38:32.264822 2137 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:38:32.265847 kubelet[2137]: E0421 10:38:32.265825 2137 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:38:32.265892 kubelet[2137]: E0421 10:38:32.265861 2137 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:38:32.340924 systemd[1]: Created slice kubepods-burstable-pod97a74fc1e15fd757e682914ee4a04e16.slice - libcontainer container kubepods-burstable-pod97a74fc1e15fd757e682914ee4a04e16.slice. 
Apr 21 10:38:32.364223 kubelet[2137]: E0421 10:38:32.364179 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:32.365661 kubelet[2137]: I0421 10:38:32.365600 2137 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:38:32.365963 kubelet[2137]: E0421 10:38:32.365929 2137 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Apr 21 10:38:32.367054 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 21 10:38:32.368229 kubelet[2137]: E0421 10:38:32.368202 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:32.382497 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 21 10:38:32.384149 kubelet[2137]: E0421 10:38:32.384076 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:32.414982 kubelet[2137]: E0421 10:38:32.414888 2137 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Apr 21 10:38:32.514598 kubelet[2137]: I0421 10:38:32.514402 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:32.514598 kubelet[2137]: I0421 10:38:32.514472 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:32.514598 kubelet[2137]: I0421 10:38:32.514564 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:32.514799 kubelet[2137]: I0421 10:38:32.514608 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:38:32.514799 kubelet[2137]: I0421 10:38:32.514626 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97a74fc1e15fd757e682914ee4a04e16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97a74fc1e15fd757e682914ee4a04e16\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:32.514799 kubelet[2137]: I0421 10:38:32.514643 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:32.514799 kubelet[2137]: I0421 10:38:32.514660 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97a74fc1e15fd757e682914ee4a04e16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97a74fc1e15fd757e682914ee4a04e16\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:32.514799 kubelet[2137]: I0421 10:38:32.514673 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97a74fc1e15fd757e682914ee4a04e16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97a74fc1e15fd757e682914ee4a04e16\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:32.514903 kubelet[2137]: I0421 10:38:32.514687 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:32.568082 kubelet[2137]: I0421 10:38:32.567982 2137 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:38:32.568398 kubelet[2137]: E0421 10:38:32.568361 2137 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Apr 21 10:38:32.667885 kubelet[2137]: E0421 10:38:32.667838 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:32.668862 containerd[1462]: time="2026-04-21T10:38:32.668693098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97a74fc1e15fd757e682914ee4a04e16,Namespace:kube-system,Attempt:0,}" Apr 21 10:38:32.669931 kubelet[2137]: E0421 10:38:32.669871 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:32.670258 containerd[1462]: time="2026-04-21T10:38:32.670224426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 21 10:38:32.686927 kubelet[2137]: E0421 10:38:32.686814 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:32.687589 containerd[1462]: time="2026-04-21T10:38:32.687335302Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 21 10:38:32.816148 kubelet[2137]: E0421 10:38:32.815983 2137 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Apr 21 10:38:32.970003 kubelet[2137]: I0421 10:38:32.969949 2137 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:38:32.970327 kubelet[2137]: E0421 10:38:32.970265 2137 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Apr 21 10:38:33.027652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372985389.mount: Deactivated successfully. Apr 21 10:38:33.033940 containerd[1462]: time="2026-04-21T10:38:33.033869233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:38:33.035565 containerd[1462]: time="2026-04-21T10:38:33.035483000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:38:33.036344 containerd[1462]: time="2026-04-21T10:38:33.036311492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:38:33.037261 containerd[1462]: time="2026-04-21T10:38:33.037227487Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 
10:38:33.038115 containerd[1462]: time="2026-04-21T10:38:33.038069923Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:38:33.038636 containerd[1462]: time="2026-04-21T10:38:33.038548807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:38:33.039517 containerd[1462]: time="2026-04-21T10:38:33.039485074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:38:33.040411 containerd[1462]: time="2026-04-21T10:38:33.040373309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:38:33.040975 containerd[1462]: time="2026-04-21T10:38:33.040947747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 372.133503ms" Apr 21 10:38:33.042930 containerd[1462]: time="2026-04-21T10:38:33.042898930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 355.503532ms" Apr 21 10:38:33.043835 containerd[1462]: time="2026-04-21T10:38:33.043687794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 373.403583ms" Apr 21 10:38:33.143183 containerd[1462]: time="2026-04-21T10:38:33.142995961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:33.143397 containerd[1462]: time="2026-04-21T10:38:33.143327720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:33.143397 containerd[1462]: time="2026-04-21T10:38:33.143355375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:33.143446 containerd[1462]: time="2026-04-21T10:38:33.143401369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:33.143911 containerd[1462]: time="2026-04-21T10:38:33.143671492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:33.143911 containerd[1462]: time="2026-04-21T10:38:33.143691694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:33.143911 containerd[1462]: time="2026-04-21T10:38:33.143744320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:33.143911 containerd[1462]: time="2026-04-21T10:38:33.143582868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:33.147328 containerd[1462]: time="2026-04-21T10:38:33.147160105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:33.147328 containerd[1462]: time="2026-04-21T10:38:33.147218993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:33.147328 containerd[1462]: time="2026-04-21T10:38:33.147236618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:33.147328 containerd[1462]: time="2026-04-21T10:38:33.147315196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:33.168946 systemd[1]: Started cri-containerd-cb38db675ef4ebd79108c1ef884ee13cbfed8684f63e969550b7d676aa14d836.scope - libcontainer container cb38db675ef4ebd79108c1ef884ee13cbfed8684f63e969550b7d676aa14d836. Apr 21 10:38:33.170300 systemd[1]: Started cri-containerd-e8b5cb50f105b4cf1eccfff58536e0df318b06081637242415b6d3dc278334a3.scope - libcontainer container e8b5cb50f105b4cf1eccfff58536e0df318b06081637242415b6d3dc278334a3. Apr 21 10:38:33.173363 systemd[1]: Started cri-containerd-fde061d36fd77614ba2944bc8d066cc7626c808724e8e5d209a16e7b3bc21db8.scope - libcontainer container fde061d36fd77614ba2944bc8d066cc7626c808724e8e5d209a16e7b3bc21db8. 
Apr 21 10:38:33.213621 containerd[1462]: time="2026-04-21T10:38:33.213556029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97a74fc1e15fd757e682914ee4a04e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8b5cb50f105b4cf1eccfff58536e0df318b06081637242415b6d3dc278334a3\"" Apr 21 10:38:33.215992 kubelet[2137]: E0421 10:38:33.215947 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:33.221575 containerd[1462]: time="2026-04-21T10:38:33.221488849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb38db675ef4ebd79108c1ef884ee13cbfed8684f63e969550b7d676aa14d836\"" Apr 21 10:38:33.222406 kubelet[2137]: E0421 10:38:33.222327 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:33.224441 containerd[1462]: time="2026-04-21T10:38:33.224340541Z" level=info msg="CreateContainer within sandbox \"e8b5cb50f105b4cf1eccfff58536e0df318b06081637242415b6d3dc278334a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:38:33.226268 containerd[1462]: time="2026-04-21T10:38:33.226199940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde061d36fd77614ba2944bc8d066cc7626c808724e8e5d209a16e7b3bc21db8\"" Apr 21 10:38:33.226631 containerd[1462]: time="2026-04-21T10:38:33.226572967Z" level=info msg="CreateContainer within sandbox \"cb38db675ef4ebd79108c1ef884ee13cbfed8684f63e969550b7d676aa14d836\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:38:33.227584 
kubelet[2137]: E0421 10:38:33.227542 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:33.230791 containerd[1462]: time="2026-04-21T10:38:33.230727402Z" level=info msg="CreateContainer within sandbox \"fde061d36fd77614ba2944bc8d066cc7626c808724e8e5d209a16e7b3bc21db8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:38:33.242481 containerd[1462]: time="2026-04-21T10:38:33.242403233Z" level=info msg="CreateContainer within sandbox \"e8b5cb50f105b4cf1eccfff58536e0df318b06081637242415b6d3dc278334a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20958d9ddd414b96f0cd18af6f5132a967a93b8ad78b9f53d7b78abb6be8b996\"" Apr 21 10:38:33.243185 containerd[1462]: time="2026-04-21T10:38:33.243105377Z" level=info msg="StartContainer for \"20958d9ddd414b96f0cd18af6f5132a967a93b8ad78b9f53d7b78abb6be8b996\"" Apr 21 10:38:33.247254 containerd[1462]: time="2026-04-21T10:38:33.247179115Z" level=info msg="CreateContainer within sandbox \"cb38db675ef4ebd79108c1ef884ee13cbfed8684f63e969550b7d676aa14d836\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"640077885a7b0402bc2a41cb8016bb32123ebdf95cab63bff3dc6120fe0b343b\"" Apr 21 10:38:33.247594 containerd[1462]: time="2026-04-21T10:38:33.247525306Z" level=info msg="StartContainer for \"640077885a7b0402bc2a41cb8016bb32123ebdf95cab63bff3dc6120fe0b343b\"" Apr 21 10:38:33.250642 containerd[1462]: time="2026-04-21T10:38:33.250567392Z" level=info msg="CreateContainer within sandbox \"fde061d36fd77614ba2944bc8d066cc7626c808724e8e5d209a16e7b3bc21db8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c3ae26bb5aa6030feed936403aad8f997959d15d2ad2f0b9cced63e90c71d7a\"" Apr 21 10:38:33.251014 containerd[1462]: time="2026-04-21T10:38:33.250965486Z" level=info msg="StartContainer for 
\"3c3ae26bb5aa6030feed936403aad8f997959d15d2ad2f0b9cced63e90c71d7a\"" Apr 21 10:38:33.267969 systemd[1]: Started cri-containerd-20958d9ddd414b96f0cd18af6f5132a967a93b8ad78b9f53d7b78abb6be8b996.scope - libcontainer container 20958d9ddd414b96f0cd18af6f5132a967a93b8ad78b9f53d7b78abb6be8b996. Apr 21 10:38:33.271344 systemd[1]: Started cri-containerd-640077885a7b0402bc2a41cb8016bb32123ebdf95cab63bff3dc6120fe0b343b.scope - libcontainer container 640077885a7b0402bc2a41cb8016bb32123ebdf95cab63bff3dc6120fe0b343b. Apr 21 10:38:33.275977 systemd[1]: Started cri-containerd-3c3ae26bb5aa6030feed936403aad8f997959d15d2ad2f0b9cced63e90c71d7a.scope - libcontainer container 3c3ae26bb5aa6030feed936403aad8f997959d15d2ad2f0b9cced63e90c71d7a. Apr 21 10:38:33.312193 containerd[1462]: time="2026-04-21T10:38:33.311662696Z" level=info msg="StartContainer for \"640077885a7b0402bc2a41cb8016bb32123ebdf95cab63bff3dc6120fe0b343b\" returns successfully" Apr 21 10:38:33.312193 containerd[1462]: time="2026-04-21T10:38:33.311737147Z" level=info msg="StartContainer for \"20958d9ddd414b96f0cd18af6f5132a967a93b8ad78b9f53d7b78abb6be8b996\" returns successfully" Apr 21 10:38:33.322216 containerd[1462]: time="2026-04-21T10:38:33.322150591Z" level=info msg="StartContainer for \"3c3ae26bb5aa6030feed936403aad8f997959d15d2ad2f0b9cced63e90c71d7a\" returns successfully" Apr 21 10:38:33.772360 kubelet[2137]: I0421 10:38:33.772189 2137 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:38:34.040386 kubelet[2137]: E0421 10:38:34.040258 2137 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 10:38:34.146201 kubelet[2137]: I0421 10:38:34.146159 2137 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 21 10:38:34.146201 kubelet[2137]: E0421 10:38:34.146201 2137 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": 
node \"localhost\" not found" Apr 21 10:38:34.157667 kubelet[2137]: E0421 10:38:34.157511 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.240944 kubelet[2137]: E0421 10:38:34.240894 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:34.241321 kubelet[2137]: E0421 10:38:34.240994 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:34.242401 kubelet[2137]: E0421 10:38:34.242372 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:34.242499 kubelet[2137]: E0421 10:38:34.242483 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:34.243490 kubelet[2137]: E0421 10:38:34.243463 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:34.243583 kubelet[2137]: E0421 10:38:34.243565 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:34.257717 kubelet[2137]: E0421 10:38:34.257644 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.358193 kubelet[2137]: E0421 10:38:34.358056 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.458675 kubelet[2137]: E0421 10:38:34.458608 2137 kubelet_node_status.go:392] 
"Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.559801 kubelet[2137]: E0421 10:38:34.559696 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.660896 kubelet[2137]: E0421 10:38:34.660731 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.761602 kubelet[2137]: E0421 10:38:34.761440 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.862728 kubelet[2137]: E0421 10:38:34.862646 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:34.963880 kubelet[2137]: E0421 10:38:34.963516 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.064517 kubelet[2137]: E0421 10:38:35.064443 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.165431 kubelet[2137]: E0421 10:38:35.165359 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.246196 kubelet[2137]: E0421 10:38:35.246027 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:35.246196 kubelet[2137]: E0421 10:38:35.246109 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:35.246196 kubelet[2137]: E0421 10:38:35.246163 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:35.246196 kubelet[2137]: 
E0421 10:38:35.246188 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:35.246569 kubelet[2137]: E0421 10:38:35.246238 2137 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:38:35.246569 kubelet[2137]: E0421 10:38:35.246500 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:35.266559 kubelet[2137]: E0421 10:38:35.266500 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.367518 kubelet[2137]: E0421 10:38:35.367472 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.468552 kubelet[2137]: E0421 10:38:35.468454 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.569349 kubelet[2137]: E0421 10:38:35.569163 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.669912 kubelet[2137]: E0421 10:38:35.669816 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.770680 kubelet[2137]: E0421 10:38:35.770607 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.871375 kubelet[2137]: E0421 10:38:35.871163 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:35.892112 systemd[1]: Reloading requested from client PID 2426 ('systemctl') (unit session-7.scope)... 
Apr 21 10:38:35.892139 systemd[1]: Reloading... Apr 21 10:38:35.938867 zram_generator::config[2466]: No configuration found. Apr 21 10:38:35.972103 kubelet[2137]: E0421 10:38:35.972004 2137 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:38:36.016125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:38:36.071708 systemd[1]: Reloading finished in 179 ms. Apr 21 10:38:36.107569 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:38:36.125084 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:38:36.125286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:38:36.132304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:38:36.238900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:38:36.243228 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:38:36.282646 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:38:36.288558 kubelet[2510]: I0421 10:38:36.288513 2510 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:38:36.288558 kubelet[2510]: I0421 10:38:36.288553 2510 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:38:36.288558 kubelet[2510]: I0421 10:38:36.288565 2510 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:38:36.288558 kubelet[2510]: I0421 10:38:36.288569 2510 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:38:36.288832 kubelet[2510]: I0421 10:38:36.288816 2510 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:38:36.289925 kubelet[2510]: I0421 10:38:36.289905 2510 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:38:36.292249 kubelet[2510]: I0421 10:38:36.292180 2510 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:38:36.297005 kubelet[2510]: E0421 10:38:36.296966 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:38:36.297182 kubelet[2510]: I0421 10:38:36.297159 2510 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:38:36.301145 kubelet[2510]: I0421 10:38:36.301106 2510 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:38:36.301410 kubelet[2510]: I0421 10:38:36.301318 2510 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:38:36.301618 kubelet[2510]: I0421 10:38:36.301340 2510 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:38:36.301824 kubelet[2510]: I0421 10:38:36.301644 2510 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:38:36.301824 
kubelet[2510]: I0421 10:38:36.301659 2510 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:38:36.301824 kubelet[2510]: I0421 10:38:36.301689 2510 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:38:36.301949 kubelet[2510]: I0421 10:38:36.301928 2510 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:38:36.302160 kubelet[2510]: I0421 10:38:36.302127 2510 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:38:36.302160 kubelet[2510]: I0421 10:38:36.302152 2510 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:38:36.302194 kubelet[2510]: I0421 10:38:36.302165 2510 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:38:36.302194 kubelet[2510]: I0421 10:38:36.302174 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:38:36.305480 kubelet[2510]: I0421 10:38:36.303706 2510 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:38:36.307791 kubelet[2510]: I0421 10:38:36.307671 2510 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:38:36.307845 kubelet[2510]: I0421 10:38:36.307799 2510 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:38:36.312729 kubelet[2510]: I0421 10:38:36.312715 2510 server.go:1257] "Started kubelet" Apr 21 10:38:36.314784 kubelet[2510]: I0421 10:38:36.314312 2510 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:38:36.314784 kubelet[2510]: I0421 10:38:36.314423 2510 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:38:36.315607 kubelet[2510]: I0421 10:38:36.315511 2510 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Apr 21 10:38:36.315658 kubelet[2510]: I0421 10:38:36.315610 2510 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:38:36.315900 kubelet[2510]: I0421 10:38:36.315855 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:38:36.315900 kubelet[2510]: I0421 10:38:36.315876 2510 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:38:36.317461 kubelet[2510]: I0421 10:38:36.317279 2510 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:38:36.318154 kubelet[2510]: I0421 10:38:36.318121 2510 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:38:36.318527 kubelet[2510]: I0421 10:38:36.318461 2510 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:38:36.318917 kubelet[2510]: I0421 10:38:36.318891 2510 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:38:36.319521 kubelet[2510]: I0421 10:38:36.319463 2510 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:38:36.319668 kubelet[2510]: I0421 10:38:36.319574 2510 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:38:36.322588 kubelet[2510]: I0421 10:38:36.322541 2510 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:38:36.324184 kubelet[2510]: E0421 10:38:36.324148 2510 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:38:36.331726 kubelet[2510]: I0421 10:38:36.331620 2510 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 10:38:36.332814 kubelet[2510]: I0421 10:38:36.332708 2510 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:38:36.332814 kubelet[2510]: I0421 10:38:36.332738 2510 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:38:36.332814 kubelet[2510]: I0421 10:38:36.332798 2510 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:38:36.332930 kubelet[2510]: E0421 10:38:36.332840 2510 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359512 2510 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359530 2510 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359548 2510 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359656 2510 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359664 2510 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359679 2510 policy_none.go:50] "Start" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359686 2510 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359741 2510 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.359968 2510 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 10:38:36.360220 kubelet[2510]: I0421 10:38:36.360031 2510 
policy_none.go:44] "Start" Apr 21 10:38:36.364499 kubelet[2510]: E0421 10:38:36.364430 2510 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:38:36.364621 kubelet[2510]: I0421 10:38:36.364597 2510 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:38:36.364647 kubelet[2510]: I0421 10:38:36.364606 2510 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:38:36.365294 kubelet[2510]: I0421 10:38:36.365226 2510 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:38:36.366016 kubelet[2510]: E0421 10:38:36.365999 2510 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:38:36.434679 kubelet[2510]: I0421 10:38:36.434460 2510 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:36.434679 kubelet[2510]: I0421 10:38:36.434510 2510 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:36.435487 kubelet[2510]: I0421 10:38:36.434510 2510 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:38:36.472550 kubelet[2510]: I0421 10:38:36.472515 2510 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:38:36.479835 kubelet[2510]: I0421 10:38:36.479794 2510 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 21 10:38:36.479922 kubelet[2510]: I0421 10:38:36.479857 2510 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 21 10:38:36.619992 kubelet[2510]: I0421 10:38:36.619853 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:36.619992 kubelet[2510]: I0421 10:38:36.619885 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:36.619992 kubelet[2510]: I0421 10:38:36.619909 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:36.619992 kubelet[2510]: I0421 10:38:36.619923 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97a74fc1e15fd757e682914ee4a04e16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97a74fc1e15fd757e682914ee4a04e16\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:36.619992 kubelet[2510]: I0421 10:38:36.619961 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97a74fc1e15fd757e682914ee4a04e16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97a74fc1e15fd757e682914ee4a04e16\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:36.620205 kubelet[2510]: I0421 10:38:36.620003 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:36.620205 kubelet[2510]: I0421 10:38:36.620030 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:38:36.620205 kubelet[2510]: I0421 10:38:36.620082 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:38:36.620205 kubelet[2510]: I0421 10:38:36.620102 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97a74fc1e15fd757e682914ee4a04e16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97a74fc1e15fd757e682914ee4a04e16\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:36.743030 kubelet[2510]: E0421 10:38:36.742903 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:36.743412 kubelet[2510]: E0421 10:38:36.743186 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:36.743412 kubelet[2510]: E0421 10:38:36.743388 2510 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:37.303105 kubelet[2510]: I0421 10:38:37.303014 2510 apiserver.go:52] "Watching apiserver" Apr 21 10:38:37.319808 kubelet[2510]: I0421 10:38:37.319676 2510 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:38:37.343197 kubelet[2510]: I0421 10:38:37.343079 2510 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:37.343309 kubelet[2510]: E0421 10:38:37.343212 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:37.343629 kubelet[2510]: E0421 10:38:37.343565 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:37.350393 kubelet[2510]: E0421 10:38:37.349924 2510 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:38:37.350393 kubelet[2510]: E0421 10:38:37.350099 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:37.359328 kubelet[2510]: I0421 10:38:37.359201 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.359189176 podStartE2EDuration="1.359189176s" podCreationTimestamp="2026-04-21 10:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:38:37.358936121 +0000 UTC m=+1.112268300" watchObservedRunningTime="2026-04-21 
10:38:37.359189176 +0000 UTC m=+1.112521352" Apr 21 10:38:37.370983 kubelet[2510]: I0421 10:38:37.370846 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.370834089 podStartE2EDuration="1.370834089s" podCreationTimestamp="2026-04-21 10:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:38:37.365089459 +0000 UTC m=+1.118421631" watchObservedRunningTime="2026-04-21 10:38:37.370834089 +0000 UTC m=+1.124166253" Apr 21 10:38:38.344489 kubelet[2510]: E0421 10:38:38.344285 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:38.344489 kubelet[2510]: E0421 10:38:38.344368 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:39.348640 kubelet[2510]: E0421 10:38:39.348469 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:39.355285 kubelet[2510]: I0421 10:38:39.355177 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.355166687 podStartE2EDuration="3.355166687s" podCreationTimestamp="2026-04-21 10:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:38:37.371083833 +0000 UTC m=+1.124416018" watchObservedRunningTime="2026-04-21 10:38:39.355166687 +0000 UTC m=+3.108498851" Apr 21 10:38:41.206791 kubelet[2510]: E0421 10:38:41.206452 2510 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:43.145283 kubelet[2510]: I0421 10:38:43.145225 2510 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:38:43.145642 containerd[1462]: time="2026-04-21T10:38:43.145593597Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:38:43.145850 kubelet[2510]: I0421 10:38:43.145804 2510 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:38:44.162903 systemd[1]: Created slice kubepods-besteffort-pod648431cc_4ef9_4462_bfc6_a82149b3510a.slice - libcontainer container kubepods-besteffort-pod648431cc_4ef9_4462_bfc6_a82149b3510a.slice. Apr 21 10:38:44.168226 kubelet[2510]: I0421 10:38:44.168186 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/648431cc-4ef9-4462-bfc6-a82149b3510a-kube-proxy\") pod \"kube-proxy-tp2sg\" (UID: \"648431cc-4ef9-4462-bfc6-a82149b3510a\") " pod="kube-system/kube-proxy-tp2sg" Apr 21 10:38:44.168226 kubelet[2510]: I0421 10:38:44.168222 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/648431cc-4ef9-4462-bfc6-a82149b3510a-xtables-lock\") pod \"kube-proxy-tp2sg\" (UID: \"648431cc-4ef9-4462-bfc6-a82149b3510a\") " pod="kube-system/kube-proxy-tp2sg" Apr 21 10:38:44.168513 kubelet[2510]: I0421 10:38:44.168237 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/648431cc-4ef9-4462-bfc6-a82149b3510a-lib-modules\") pod \"kube-proxy-tp2sg\" (UID: \"648431cc-4ef9-4462-bfc6-a82149b3510a\") " pod="kube-system/kube-proxy-tp2sg" Apr 21 10:38:44.168513 
kubelet[2510]: I0421 10:38:44.168265 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwj4\" (UniqueName: \"kubernetes.io/projected/648431cc-4ef9-4462-bfc6-a82149b3510a-kube-api-access-9cwj4\") pod \"kube-proxy-tp2sg\" (UID: \"648431cc-4ef9-4462-bfc6-a82149b3510a\") " pod="kube-system/kube-proxy-tp2sg" Apr 21 10:38:44.376170 systemd[1]: Created slice kubepods-besteffort-podc51d996e_5a15_42ae_9752_4eabe4531150.slice - libcontainer container kubepods-besteffort-podc51d996e_5a15_42ae_9752_4eabe4531150.slice. Apr 21 10:38:44.471071 kubelet[2510]: I0421 10:38:44.470838 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndjz7\" (UniqueName: \"kubernetes.io/projected/c51d996e-5a15-42ae-9752-4eabe4531150-kube-api-access-ndjz7\") pod \"tigera-operator-6cf4cccc57-p4m8b\" (UID: \"c51d996e-5a15-42ae-9752-4eabe4531150\") " pod="tigera-operator/tigera-operator-6cf4cccc57-p4m8b" Apr 21 10:38:44.471071 kubelet[2510]: I0421 10:38:44.470902 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c51d996e-5a15-42ae-9752-4eabe4531150-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-p4m8b\" (UID: \"c51d996e-5a15-42ae-9752-4eabe4531150\") " pod="tigera-operator/tigera-operator-6cf4cccc57-p4m8b" Apr 21 10:38:44.476679 kubelet[2510]: E0421 10:38:44.476578 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:44.477404 containerd[1462]: time="2026-04-21T10:38:44.477359914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tp2sg,Uid:648431cc-4ef9-4462-bfc6-a82149b3510a,Namespace:kube-system,Attempt:0,}" Apr 21 10:38:44.498553 containerd[1462]: time="2026-04-21T10:38:44.497963010Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:44.498553 containerd[1462]: time="2026-04-21T10:38:44.498502542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:44.498553 containerd[1462]: time="2026-04-21T10:38:44.498512779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:44.498695 containerd[1462]: time="2026-04-21T10:38:44.498599951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:44.521941 systemd[1]: Started cri-containerd-8f487a972110cd6dbf890a76d3ed12691884c0d651ac584ac51c0de449267ede.scope - libcontainer container 8f487a972110cd6dbf890a76d3ed12691884c0d651ac584ac51c0de449267ede. Apr 21 10:38:44.538725 containerd[1462]: time="2026-04-21T10:38:44.538689854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tp2sg,Uid:648431cc-4ef9-4462-bfc6-a82149b3510a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f487a972110cd6dbf890a76d3ed12691884c0d651ac584ac51c0de449267ede\"" Apr 21 10:38:44.539413 kubelet[2510]: E0421 10:38:44.539387 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:44.544516 containerd[1462]: time="2026-04-21T10:38:44.544493473Z" level=info msg="CreateContainer within sandbox \"8f487a972110cd6dbf890a76d3ed12691884c0d651ac584ac51c0de449267ede\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:38:44.555980 containerd[1462]: time="2026-04-21T10:38:44.555938020Z" level=info msg="CreateContainer within sandbox \"8f487a972110cd6dbf890a76d3ed12691884c0d651ac584ac51c0de449267ede\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcc31d428caa84d4d32038499994053df7872cf52ef8b4327f6efd64b19e45f0\"" Apr 21 10:38:44.556406 containerd[1462]: time="2026-04-21T10:38:44.556360986Z" level=info msg="StartContainer for \"dcc31d428caa84d4d32038499994053df7872cf52ef8b4327f6efd64b19e45f0\"" Apr 21 10:38:44.567379 kubelet[2510]: E0421 10:38:44.567031 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:44.582963 systemd[1]: Started cri-containerd-dcc31d428caa84d4d32038499994053df7872cf52ef8b4327f6efd64b19e45f0.scope - libcontainer container dcc31d428caa84d4d32038499994053df7872cf52ef8b4327f6efd64b19e45f0. Apr 21 10:38:44.604336 containerd[1462]: time="2026-04-21T10:38:44.604298931Z" level=info msg="StartContainer for \"dcc31d428caa84d4d32038499994053df7872cf52ef8b4327f6efd64b19e45f0\" returns successfully" Apr 21 10:38:44.681906 containerd[1462]: time="2026-04-21T10:38:44.681863081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-p4m8b,Uid:c51d996e-5a15-42ae-9752-4eabe4531150,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:38:44.704962 containerd[1462]: time="2026-04-21T10:38:44.704873131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:44.704962 containerd[1462]: time="2026-04-21T10:38:44.704930878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:44.704962 containerd[1462]: time="2026-04-21T10:38:44.704939579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:44.705145 containerd[1462]: time="2026-04-21T10:38:44.704991009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:44.722933 systemd[1]: Started cri-containerd-890d02ca5d087f6c7d20d0a3fe122c67deddf656fb0ca684beea54e213f4e148.scope - libcontainer container 890d02ca5d087f6c7d20d0a3fe122c67deddf656fb0ca684beea54e213f4e148. Apr 21 10:38:44.753110 containerd[1462]: time="2026-04-21T10:38:44.753037586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-p4m8b,Uid:c51d996e-5a15-42ae-9752-4eabe4531150,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"890d02ca5d087f6c7d20d0a3fe122c67deddf656fb0ca684beea54e213f4e148\"" Apr 21 10:38:44.754712 containerd[1462]: time="2026-04-21T10:38:44.754660183Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:38:45.357935 kubelet[2510]: E0421 10:38:45.357903 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:45.366075 kubelet[2510]: I0421 10:38:45.365858 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tp2sg" podStartSLOduration=1.36584647 podStartE2EDuration="1.36584647s" podCreationTimestamp="2026-04-21 10:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:38:45.365842518 +0000 UTC m=+9.119174685" watchObservedRunningTime="2026-04-21 10:38:45.36584647 +0000 UTC m=+9.119178644" Apr 21 10:38:45.787500 kubelet[2510]: E0421 10:38:45.787306 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
10:38:46.087006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259043426.mount: Deactivated successfully. Apr 21 10:38:46.682029 containerd[1462]: time="2026-04-21T10:38:46.681961114Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:46.682717 containerd[1462]: time="2026-04-21T10:38:46.682676283Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:38:46.683628 containerd[1462]: time="2026-04-21T10:38:46.683601645Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:46.685585 containerd[1462]: time="2026-04-21T10:38:46.685528284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:46.686225 containerd[1462]: time="2026-04-21T10:38:46.686151267Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.931442686s" Apr 21 10:38:46.686225 containerd[1462]: time="2026-04-21T10:38:46.686187593Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:38:46.690661 containerd[1462]: time="2026-04-21T10:38:46.690594990Z" level=info msg="CreateContainer within sandbox \"890d02ca5d087f6c7d20d0a3fe122c67deddf656fb0ca684beea54e213f4e148\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 
10:38:46.700206 containerd[1462]: time="2026-04-21T10:38:46.700166491Z" level=info msg="CreateContainer within sandbox \"890d02ca5d087f6c7d20d0a3fe122c67deddf656fb0ca684beea54e213f4e148\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aa9c007b7c08a3ba15a59a3e2a2f90d08f1e35f9a1f64e3c74ffd4656b7affa6\"" Apr 21 10:38:46.700648 containerd[1462]: time="2026-04-21T10:38:46.700603969Z" level=info msg="StartContainer for \"aa9c007b7c08a3ba15a59a3e2a2f90d08f1e35f9a1f64e3c74ffd4656b7affa6\"" Apr 21 10:38:46.727912 systemd[1]: Started cri-containerd-aa9c007b7c08a3ba15a59a3e2a2f90d08f1e35f9a1f64e3c74ffd4656b7affa6.scope - libcontainer container aa9c007b7c08a3ba15a59a3e2a2f90d08f1e35f9a1f64e3c74ffd4656b7affa6. Apr 21 10:38:46.746470 containerd[1462]: time="2026-04-21T10:38:46.746419744Z" level=info msg="StartContainer for \"aa9c007b7c08a3ba15a59a3e2a2f90d08f1e35f9a1f64e3c74ffd4656b7affa6\" returns successfully" Apr 21 10:38:47.371641 kubelet[2510]: I0421 10:38:47.371538 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-p4m8b" podStartSLOduration=1.438997042 podStartE2EDuration="3.371526302s" podCreationTimestamp="2026-04-21 10:38:44 +0000 UTC" firstStartedPulling="2026-04-21 10:38:44.754413654 +0000 UTC m=+8.507745819" lastFinishedPulling="2026-04-21 10:38:46.686942915 +0000 UTC m=+10.440275079" observedRunningTime="2026-04-21 10:38:47.371382827 +0000 UTC m=+11.124714997" watchObservedRunningTime="2026-04-21 10:38:47.371526302 +0000 UTC m=+11.124858481" Apr 21 10:38:51.215903 kubelet[2510]: E0421 10:38:51.215829 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:51.483911 sudo[1635]: pam_unix(sudo:session): session closed for user root Apr 21 10:38:51.485966 sshd[1632]: pam_unix(sshd:session): session closed for user core Apr 21 
10:38:51.488921 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:59170.service: Deactivated successfully. Apr 21 10:38:51.494207 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:38:51.494345 systemd[1]: session-7.scope: Consumed 3.093s CPU time, 160.2M memory peak, 0B memory swap peak. Apr 21 10:38:51.496185 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:38:51.497081 systemd-logind[1440]: Removed session 7. Apr 21 10:38:52.833689 systemd[1]: Created slice kubepods-besteffort-pod11c00500_f3df_4b3a_98c5_4bc1b6dd3135.slice - libcontainer container kubepods-besteffort-pod11c00500_f3df_4b3a_98c5_4bc1b6dd3135.slice. Apr 21 10:38:52.851120 systemd[1]: Created slice kubepods-besteffort-pod97bee1a7_eca3_4239_8278_6c7d89e8ed74.slice - libcontainer container kubepods-besteffort-pod97bee1a7_eca3_4239_8278_6c7d89e8ed74.slice. Apr 21 10:38:52.926922 kubelet[2510]: I0421 10:38:52.926884 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-cni-log-dir\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.926922 kubelet[2510]: I0421 10:38:52.926926 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/11c00500-f3df-4b3a-98c5-4bc1b6dd3135-typha-certs\") pod \"calico-typha-f96fbbbd5-jcnr4\" (UID: \"11c00500-f3df-4b3a-98c5-4bc1b6dd3135\") " pod="calico-system/calico-typha-f96fbbbd5-jcnr4" Apr 21 10:38:52.927267 kubelet[2510]: I0421 10:38:52.926941 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-flexvol-driver-host\") pod \"calico-node-d5t6r\" (UID: 
\"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927267 kubelet[2510]: I0421 10:38:52.926958 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97bee1a7-eca3-4239-8278-6c7d89e8ed74-tigera-ca-bundle\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927267 kubelet[2510]: I0421 10:38:52.926973 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-var-lib-calico\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927267 kubelet[2510]: I0421 10:38:52.927018 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-lib-modules\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927267 kubelet[2510]: I0421 10:38:52.927043 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97bee1a7-eca3-4239-8278-6c7d89e8ed74-node-certs\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927371 kubelet[2510]: I0421 10:38:52.927058 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmm2\" (UniqueName: \"kubernetes.io/projected/97bee1a7-eca3-4239-8278-6c7d89e8ed74-kube-api-access-btmm2\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " 
pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927371 kubelet[2510]: I0421 10:38:52.927072 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nfp\" (UniqueName: \"kubernetes.io/projected/11c00500-f3df-4b3a-98c5-4bc1b6dd3135-kube-api-access-58nfp\") pod \"calico-typha-f96fbbbd5-jcnr4\" (UID: \"11c00500-f3df-4b3a-98c5-4bc1b6dd3135\") " pod="calico-system/calico-typha-f96fbbbd5-jcnr4" Apr 21 10:38:52.927371 kubelet[2510]: I0421 10:38:52.927126 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-cni-bin-dir\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927371 kubelet[2510]: I0421 10:38:52.927142 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-nodeproc\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927371 kubelet[2510]: I0421 10:38:52.927218 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-xtables-lock\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927474 kubelet[2510]: I0421 10:38:52.927365 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11c00500-f3df-4b3a-98c5-4bc1b6dd3135-tigera-ca-bundle\") pod \"calico-typha-f96fbbbd5-jcnr4\" (UID: \"11c00500-f3df-4b3a-98c5-4bc1b6dd3135\") " 
pod="calico-system/calico-typha-f96fbbbd5-jcnr4" Apr 21 10:38:52.927474 kubelet[2510]: I0421 10:38:52.927405 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-cni-net-dir\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927474 kubelet[2510]: I0421 10:38:52.927436 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-bpffs\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927474 kubelet[2510]: I0421 10:38:52.927460 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-sys-fs\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927606 kubelet[2510]: I0421 10:38:52.927494 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-policysync\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.927606 kubelet[2510]: I0421 10:38:52.927512 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97bee1a7-eca3-4239-8278-6c7d89e8ed74-var-run-calico\") pod \"calico-node-d5t6r\" (UID: \"97bee1a7-eca3-4239-8278-6c7d89e8ed74\") " pod="calico-system/calico-node-d5t6r" Apr 21 10:38:52.951294 kubelet[2510]: E0421 10:38:52.950308 
2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:38:53.028118 kubelet[2510]: I0421 10:38:53.028036 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbxwh\" (UniqueName: \"kubernetes.io/projected/9fdc5648-d90a-492e-8550-ef4cb967e14b-kube-api-access-xbxwh\") pod \"csi-node-driver-kshx6\" (UID: \"9fdc5648-d90a-492e-8550-ef4cb967e14b\") " pod="calico-system/csi-node-driver-kshx6" Apr 21 10:38:53.028428 kubelet[2510]: I0421 10:38:53.028208 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9fdc5648-d90a-492e-8550-ef4cb967e14b-varrun\") pod \"csi-node-driver-kshx6\" (UID: \"9fdc5648-d90a-492e-8550-ef4cb967e14b\") " pod="calico-system/csi-node-driver-kshx6" Apr 21 10:38:53.030189 kubelet[2510]: I0421 10:38:53.028829 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9fdc5648-d90a-492e-8550-ef4cb967e14b-socket-dir\") pod \"csi-node-driver-kshx6\" (UID: \"9fdc5648-d90a-492e-8550-ef4cb967e14b\") " pod="calico-system/csi-node-driver-kshx6" Apr 21 10:38:53.030189 kubelet[2510]: I0421 10:38:53.028893 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fdc5648-d90a-492e-8550-ef4cb967e14b-kubelet-dir\") pod \"csi-node-driver-kshx6\" (UID: \"9fdc5648-d90a-492e-8550-ef4cb967e14b\") " pod="calico-system/csi-node-driver-kshx6" Apr 21 10:38:53.030189 kubelet[2510]: I0421 10:38:53.028907 2510 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9fdc5648-d90a-492e-8550-ef4cb967e14b-registration-dir\") pod \"csi-node-driver-kshx6\" (UID: \"9fdc5648-d90a-492e-8550-ef4cb967e14b\") " pod="calico-system/csi-node-driver-kshx6" Apr 21 10:38:53.030668 kubelet[2510]: E0421 10:38:53.030619 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.030668 kubelet[2510]: W0421 10:38:53.030668 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.030747 kubelet[2510]: E0421 10:38:53.030683 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.030886 kubelet[2510]: E0421 10:38:53.030866 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.030940 kubelet[2510]: W0421 10:38:53.030888 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.030940 kubelet[2510]: E0421 10:38:53.030896 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Apr 21 10:38:53.135371 kubelet[2510]: E0421 10:38:53.135356 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.135371 kubelet[2510]: W0421 10:38:53.135370 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.135409 kubelet[2510]: E0421 10:38:53.135375 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.135589 kubelet[2510]: E0421 10:38:53.135570 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.135608 kubelet[2510]: W0421 10:38:53.135589 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.135608 kubelet[2510]: E0421 10:38:53.135598 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:53.135857 kubelet[2510]: E0421 10:38:53.135841 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.135881 kubelet[2510]: W0421 10:38:53.135857 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.135881 kubelet[2510]: E0421 10:38:53.135864 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.136048 kubelet[2510]: E0421 10:38:53.136033 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.136048 kubelet[2510]: W0421 10:38:53.136046 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.136083 kubelet[2510]: E0421 10:38:53.136052 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:53.136288 kubelet[2510]: E0421 10:38:53.136269 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.136288 kubelet[2510]: W0421 10:38:53.136284 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.136288 kubelet[2510]: E0421 10:38:53.136290 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.136655 kubelet[2510]: E0421 10:38:53.136630 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.136655 kubelet[2510]: W0421 10:38:53.136652 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.136713 kubelet[2510]: E0421 10:38:53.136662 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:53.136943 kubelet[2510]: E0421 10:38:53.136921 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.136943 kubelet[2510]: W0421 10:38:53.136940 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.137377 kubelet[2510]: E0421 10:38:53.136950 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.137377 kubelet[2510]: E0421 10:38:53.137150 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.137377 kubelet[2510]: W0421 10:38:53.137156 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.137377 kubelet[2510]: E0421 10:38:53.137162 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:53.137377 kubelet[2510]: E0421 10:38:53.137332 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.137377 kubelet[2510]: W0421 10:38:53.137338 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.137377 kubelet[2510]: E0421 10:38:53.137344 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.137591 kubelet[2510]: E0421 10:38:53.137569 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.137613 kubelet[2510]: W0421 10:38:53.137592 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.137613 kubelet[2510]: E0421 10:38:53.137601 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:53.138707 kubelet[2510]: E0421 10:38:53.138679 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:53.139290 containerd[1462]: time="2026-04-21T10:38:53.139253132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f96fbbbd5-jcnr4,Uid:11c00500-f3df-4b3a-98c5-4bc1b6dd3135,Namespace:calico-system,Attempt:0,}" Apr 21 10:38:53.144382 kubelet[2510]: E0421 10:38:53.144359 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:53.144382 kubelet[2510]: W0421 10:38:53.144379 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:53.144456 kubelet[2510]: E0421 10:38:53.144414 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:53.155933 containerd[1462]: time="2026-04-21T10:38:53.155736872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5t6r,Uid:97bee1a7-eca3-4239-8278-6c7d89e8ed74,Namespace:calico-system,Attempt:0,}" Apr 21 10:38:53.160877 containerd[1462]: time="2026-04-21T10:38:53.160624495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:53.160877 containerd[1462]: time="2026-04-21T10:38:53.160681204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:53.160877 containerd[1462]: time="2026-04-21T10:38:53.160689934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:53.160877 containerd[1462]: time="2026-04-21T10:38:53.160747765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:53.177956 systemd[1]: Started cri-containerd-5ea3e3b493d767e9c44b1fb485fceae90cc885e628a2c0bf8bc870b3345c0a96.scope - libcontainer container 5ea3e3b493d767e9c44b1fb485fceae90cc885e628a2c0bf8bc870b3345c0a96. Apr 21 10:38:53.178569 containerd[1462]: time="2026-04-21T10:38:53.178066567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:38:53.178569 containerd[1462]: time="2026-04-21T10:38:53.178141149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:38:53.178569 containerd[1462]: time="2026-04-21T10:38:53.178314838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:53.179330 containerd[1462]: time="2026-04-21T10:38:53.179133065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:38:53.201048 systemd[1]: Started cri-containerd-a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831.scope - libcontainer container a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831. 
Apr 21 10:38:53.212675 containerd[1462]: time="2026-04-21T10:38:53.212619100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f96fbbbd5-jcnr4,Uid:11c00500-f3df-4b3a-98c5-4bc1b6dd3135,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ea3e3b493d767e9c44b1fb485fceae90cc885e628a2c0bf8bc870b3345c0a96\"" Apr 21 10:38:53.214655 kubelet[2510]: E0421 10:38:53.214147 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:53.220893 containerd[1462]: time="2026-04-21T10:38:53.220826397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:38:53.221319 containerd[1462]: time="2026-04-21T10:38:53.221257440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5t6r,Uid:97bee1a7-eca3-4239-8278-6c7d89e8ed74,Namespace:calico-system,Attempt:0,} returns sandbox id \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\"" Apr 21 10:38:54.333515 kubelet[2510]: E0421 10:38:54.333430 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:38:54.571830 kubelet[2510]: E0421 10:38:54.571745 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:54.630648 kubelet[2510]: E0421 10:38:54.630478 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.630648 kubelet[2510]: W0421 10:38:54.630514 2510 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.630648 kubelet[2510]: E0421 10:38:54.630544 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.631188 kubelet[2510]: E0421 10:38:54.631158 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.631293 kubelet[2510]: W0421 10:38:54.631191 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.631293 kubelet[2510]: E0421 10:38:54.631204 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.631533 kubelet[2510]: E0421 10:38:54.631511 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.631604 kubelet[2510]: W0421 10:38:54.631534 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.631604 kubelet[2510]: E0421 10:38:54.631546 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.631874 kubelet[2510]: E0421 10:38:54.631856 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.631896 kubelet[2510]: W0421 10:38:54.631876 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.631896 kubelet[2510]: E0421 10:38:54.631885 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.632127 kubelet[2510]: E0421 10:38:54.632094 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.632157 kubelet[2510]: W0421 10:38:54.632128 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.632157 kubelet[2510]: E0421 10:38:54.632135 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.632301 kubelet[2510]: E0421 10:38:54.632282 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.632301 kubelet[2510]: W0421 10:38:54.632296 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.632340 kubelet[2510]: E0421 10:38:54.632301 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.632459 kubelet[2510]: E0421 10:38:54.632443 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.632459 kubelet[2510]: W0421 10:38:54.632456 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.632497 kubelet[2510]: E0421 10:38:54.632461 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.632619 kubelet[2510]: E0421 10:38:54.632603 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.632619 kubelet[2510]: W0421 10:38:54.632616 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.632652 kubelet[2510]: E0421 10:38:54.632621 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.632809 kubelet[2510]: E0421 10:38:54.632789 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.632809 kubelet[2510]: W0421 10:38:54.632804 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.632809 kubelet[2510]: E0421 10:38:54.632810 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.632977 kubelet[2510]: E0421 10:38:54.632959 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.632977 kubelet[2510]: W0421 10:38:54.632973 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.633010 kubelet[2510]: E0421 10:38:54.632978 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.633152 kubelet[2510]: E0421 10:38:54.633134 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.633152 kubelet[2510]: W0421 10:38:54.633148 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.633187 kubelet[2510]: E0421 10:38:54.633153 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.633311 kubelet[2510]: E0421 10:38:54.633294 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.633311 kubelet[2510]: W0421 10:38:54.633308 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.633342 kubelet[2510]: E0421 10:38:54.633314 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.633471 kubelet[2510]: E0421 10:38:54.633454 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.633471 kubelet[2510]: W0421 10:38:54.633468 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.633501 kubelet[2510]: E0421 10:38:54.633473 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.633628 kubelet[2510]: E0421 10:38:54.633612 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.633628 kubelet[2510]: W0421 10:38:54.633625 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.633659 kubelet[2510]: E0421 10:38:54.633630 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.633847 kubelet[2510]: E0421 10:38:54.633831 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.633847 kubelet[2510]: W0421 10:38:54.633846 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.633847 kubelet[2510]: E0421 10:38:54.633852 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.634016 kubelet[2510]: E0421 10:38:54.634000 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.634016 kubelet[2510]: W0421 10:38:54.634013 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.634045 kubelet[2510]: E0421 10:38:54.634018 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.634189 kubelet[2510]: E0421 10:38:54.634172 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.634189 kubelet[2510]: W0421 10:38:54.634186 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.634224 kubelet[2510]: E0421 10:38:54.634191 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.634348 kubelet[2510]: E0421 10:38:54.634332 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.634348 kubelet[2510]: W0421 10:38:54.634345 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.634378 kubelet[2510]: E0421 10:38:54.634351 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.634509 kubelet[2510]: E0421 10:38:54.634492 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.634509 kubelet[2510]: W0421 10:38:54.634506 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.634543 kubelet[2510]: E0421 10:38:54.634510 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.634664 kubelet[2510]: E0421 10:38:54.634648 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.634664 kubelet[2510]: W0421 10:38:54.634661 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.634697 kubelet[2510]: E0421 10:38:54.634666 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.634859 kubelet[2510]: E0421 10:38:54.634845 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.634859 kubelet[2510]: W0421 10:38:54.634859 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.634897 kubelet[2510]: E0421 10:38:54.634864 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.635021 kubelet[2510]: E0421 10:38:54.635005 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.635021 kubelet[2510]: W0421 10:38:54.635018 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.635052 kubelet[2510]: E0421 10:38:54.635023 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.635196 kubelet[2510]: E0421 10:38:54.635179 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.635196 kubelet[2510]: W0421 10:38:54.635192 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.635228 kubelet[2510]: E0421 10:38:54.635197 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:54.635354 kubelet[2510]: E0421 10:38:54.635338 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.635354 kubelet[2510]: W0421 10:38:54.635351 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.635383 kubelet[2510]: E0421 10:38:54.635356 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.635516 kubelet[2510]: E0421 10:38:54.635499 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:54.635516 kubelet[2510]: W0421 10:38:54.635512 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:54.635547 kubelet[2510]: E0421 10:38:54.635518 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:54.909513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380311978.mount: Deactivated successfully. 
Apr 21 10:38:55.791331 kubelet[2510]: E0421 10:38:55.791291 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:55.844397 kubelet[2510]: E0421 10:38:55.844345 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:55.844397 kubelet[2510]: W0421 10:38:55.844378 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:55.844397 kubelet[2510]: E0421 10:38:55.844397 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:55.844603 kubelet[2510]: E0421 10:38:55.844579 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:55.844603 kubelet[2510]: W0421 10:38:55.844598 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:55.844603 kubelet[2510]: E0421 10:38:55.844606 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:55.844900 kubelet[2510]: E0421 10:38:55.844882 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:55.844940 kubelet[2510]: W0421 10:38:55.844902 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:55.844940 kubelet[2510]: E0421 10:38:55.844910 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:55.845176 kubelet[2510]: E0421 10:38:55.845134 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:55.845176 kubelet[2510]: W0421 10:38:55.845163 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:55.845176 kubelet[2510]: E0421 10:38:55.845175 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:55.845403 kubelet[2510]: E0421 10:38:55.845387 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:55.845431 kubelet[2510]: W0421 10:38:55.845402 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:55.845431 kubelet[2510]: E0421 10:38:55.845409 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:56.334176 kubelet[2510]: E0421 10:38:56.334127 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:38:56.389904 containerd[1462]: time="2026-04-21T10:38:56.389858116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:56.391558 containerd[1462]: time="2026-04-21T10:38:56.391493841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:38:56.392647 containerd[1462]: time="2026-04-21T10:38:56.392616808Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:56.394743 containerd[1462]: time="2026-04-21T10:38:56.394690645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:56.395211 containerd[1462]: time="2026-04-21T10:38:56.395179833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.174317276s" Apr 21 10:38:56.395242 containerd[1462]: time="2026-04-21T10:38:56.395215595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:38:56.396663 containerd[1462]: time="2026-04-21T10:38:56.396181765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:38:56.409496 containerd[1462]: time="2026-04-21T10:38:56.409456152Z" level=info msg="CreateContainer within sandbox \"5ea3e3b493d767e9c44b1fb485fceae90cc885e628a2c0bf8bc870b3345c0a96\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:38:56.421931 containerd[1462]: time="2026-04-21T10:38:56.421873472Z" level=info msg="CreateContainer within sandbox \"5ea3e3b493d767e9c44b1fb485fceae90cc885e628a2c0bf8bc870b3345c0a96\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f33f58fe60d64837cdb3dd8bde0dcce12e2ddb5edd6814352925aed7a9c907b3\"" Apr 21 10:38:56.422549 containerd[1462]: time="2026-04-21T10:38:56.422499751Z" level=info msg="StartContainer for \"f33f58fe60d64837cdb3dd8bde0dcce12e2ddb5edd6814352925aed7a9c907b3\"" Apr 21 10:38:56.454047 systemd[1]: Started cri-containerd-f33f58fe60d64837cdb3dd8bde0dcce12e2ddb5edd6814352925aed7a9c907b3.scope - libcontainer container f33f58fe60d64837cdb3dd8bde0dcce12e2ddb5edd6814352925aed7a9c907b3. 
Apr 21 10:38:56.489399 containerd[1462]: time="2026-04-21T10:38:56.489071618Z" level=info msg="StartContainer for \"f33f58fe60d64837cdb3dd8bde0dcce12e2ddb5edd6814352925aed7a9c907b3\" returns successfully" Apr 21 10:38:57.390893 kubelet[2510]: E0421 10:38:57.390842 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:57.457572 kubelet[2510]: E0421 10:38:57.457506 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.457572 kubelet[2510]: W0421 10:38:57.457540 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.457572 kubelet[2510]: E0421 10:38:57.457565 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.457930 kubelet[2510]: E0421 10:38:57.457910 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.457930 kubelet[2510]: W0421 10:38:57.457928 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.457976 kubelet[2510]: E0421 10:38:57.457938 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.458212 kubelet[2510]: E0421 10:38:57.458183 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.458212 kubelet[2510]: W0421 10:38:57.458210 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.458301 kubelet[2510]: E0421 10:38:57.458225 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.458527 kubelet[2510]: E0421 10:38:57.458516 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.458550 kubelet[2510]: W0421 10:38:57.458526 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.458550 kubelet[2510]: E0421 10:38:57.458535 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.458816 kubelet[2510]: E0421 10:38:57.458793 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.458816 kubelet[2510]: W0421 10:38:57.458811 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.458897 kubelet[2510]: E0421 10:38:57.458819 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.459063 kubelet[2510]: E0421 10:38:57.459024 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.459063 kubelet[2510]: W0421 10:38:57.459046 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.459063 kubelet[2510]: E0421 10:38:57.459053 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.459272 kubelet[2510]: E0421 10:38:57.459254 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.459272 kubelet[2510]: W0421 10:38:57.459268 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.459306 kubelet[2510]: E0421 10:38:57.459274 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.459451 kubelet[2510]: E0421 10:38:57.459433 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.459451 kubelet[2510]: W0421 10:38:57.459448 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.459485 kubelet[2510]: E0421 10:38:57.459453 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.459639 kubelet[2510]: E0421 10:38:57.459620 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.459639 kubelet[2510]: W0421 10:38:57.459638 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.459669 kubelet[2510]: E0421 10:38:57.459647 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.459868 kubelet[2510]: E0421 10:38:57.459852 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.459868 kubelet[2510]: W0421 10:38:57.459867 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.459905 kubelet[2510]: E0421 10:38:57.459873 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.460048 kubelet[2510]: E0421 10:38:57.460030 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.460048 kubelet[2510]: W0421 10:38:57.460044 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.460080 kubelet[2510]: E0421 10:38:57.460049 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.460243 kubelet[2510]: E0421 10:38:57.460224 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.460243 kubelet[2510]: W0421 10:38:57.460239 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.460283 kubelet[2510]: E0421 10:38:57.460245 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.460422 kubelet[2510]: E0421 10:38:57.460403 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.460422 kubelet[2510]: W0421 10:38:57.460418 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.460454 kubelet[2510]: E0421 10:38:57.460423 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.460607 kubelet[2510]: E0421 10:38:57.460589 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.460607 kubelet[2510]: W0421 10:38:57.460604 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.460638 kubelet[2510]: E0421 10:38:57.460608 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.460810 kubelet[2510]: E0421 10:38:57.460794 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.460810 kubelet[2510]: W0421 10:38:57.460809 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.460848 kubelet[2510]: E0421 10:38:57.460814 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.462219 kubelet[2510]: E0421 10:38:57.462184 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.462219 kubelet[2510]: W0421 10:38:57.462212 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.462267 kubelet[2510]: E0421 10:38:57.462226 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.462583 kubelet[2510]: E0421 10:38:57.462552 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.462583 kubelet[2510]: W0421 10:38:57.462574 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.462620 kubelet[2510]: E0421 10:38:57.462585 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.462905 kubelet[2510]: E0421 10:38:57.462884 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.462932 kubelet[2510]: W0421 10:38:57.462906 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.462932 kubelet[2510]: E0421 10:38:57.462917 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.463200 kubelet[2510]: E0421 10:38:57.463184 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.463200 kubelet[2510]: W0421 10:38:57.463199 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.463235 kubelet[2510]: E0421 10:38:57.463205 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.463390 kubelet[2510]: E0421 10:38:57.463373 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.463390 kubelet[2510]: W0421 10:38:57.463388 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.463426 kubelet[2510]: E0421 10:38:57.463394 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.463629 kubelet[2510]: E0421 10:38:57.463612 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.463629 kubelet[2510]: W0421 10:38:57.463627 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.463662 kubelet[2510]: E0421 10:38:57.463632 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.464018 kubelet[2510]: E0421 10:38:57.463984 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.464018 kubelet[2510]: W0421 10:38:57.464004 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.464018 kubelet[2510]: E0421 10:38:57.464011 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.464283 kubelet[2510]: E0421 10:38:57.464252 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.464283 kubelet[2510]: W0421 10:38:57.464270 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.464283 kubelet[2510]: E0421 10:38:57.464276 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.464459 kubelet[2510]: E0421 10:38:57.464440 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.464459 kubelet[2510]: W0421 10:38:57.464455 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.464492 kubelet[2510]: E0421 10:38:57.464461 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.464635 kubelet[2510]: E0421 10:38:57.464617 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.464635 kubelet[2510]: W0421 10:38:57.464632 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.464669 kubelet[2510]: E0421 10:38:57.464637 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.464854 kubelet[2510]: E0421 10:38:57.464836 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.464854 kubelet[2510]: W0421 10:38:57.464851 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.464894 kubelet[2510]: E0421 10:38:57.464856 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.465027 kubelet[2510]: E0421 10:38:57.465010 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.465027 kubelet[2510]: W0421 10:38:57.465025 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.465060 kubelet[2510]: E0421 10:38:57.465030 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.465246 kubelet[2510]: E0421 10:38:57.465230 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.465246 kubelet[2510]: W0421 10:38:57.465244 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.465279 kubelet[2510]: E0421 10:38:57.465250 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.465596 kubelet[2510]: E0421 10:38:57.465557 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.465596 kubelet[2510]: W0421 10:38:57.465585 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.465630 kubelet[2510]: E0421 10:38:57.465595 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.465948 kubelet[2510]: E0421 10:38:57.465915 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.465948 kubelet[2510]: W0421 10:38:57.465939 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.465986 kubelet[2510]: E0421 10:38:57.465948 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.466214 kubelet[2510]: E0421 10:38:57.466180 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.466214 kubelet[2510]: W0421 10:38:57.466205 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.466251 kubelet[2510]: E0421 10:38:57.466213 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:57.466540 kubelet[2510]: E0421 10:38:57.466519 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.466558 kubelet[2510]: W0421 10:38:57.466538 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.466558 kubelet[2510]: E0421 10:38:57.466547 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:38:57.466733 kubelet[2510]: E0421 10:38:57.466715 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:38:57.466733 kubelet[2510]: W0421 10:38:57.466729 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:38:57.466804 kubelet[2510]: E0421 10:38:57.466735 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:38:58.024464 containerd[1462]: time="2026-04-21T10:38:58.024403279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:58.025123 containerd[1462]: time="2026-04-21T10:38:58.025064899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:38:58.026452 containerd[1462]: time="2026-04-21T10:38:58.026413005Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:58.028257 containerd[1462]: time="2026-04-21T10:38:58.028222843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:38:58.029010 containerd[1462]: time="2026-04-21T10:38:58.028955320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.632752754s" Apr 21 10:38:58.029039 containerd[1462]: time="2026-04-21T10:38:58.029008502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:38:58.033376 containerd[1462]: time="2026-04-21T10:38:58.033268428Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:38:58.047806 containerd[1462]: time="2026-04-21T10:38:58.047734150Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714\"" Apr 21 10:38:58.048444 containerd[1462]: time="2026-04-21T10:38:58.048411637Z" level=info msg="StartContainer for \"03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714\"" Apr 21 10:38:58.080948 systemd[1]: Started cri-containerd-03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714.scope - libcontainer container 03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714. Apr 21 10:38:58.106496 containerd[1462]: time="2026-04-21T10:38:58.106460279Z" level=info msg="StartContainer for \"03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714\" returns successfully" Apr 21 10:38:58.115496 systemd[1]: cri-containerd-03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714.scope: Deactivated successfully. 
Apr 21 10:38:58.211066 containerd[1462]: time="2026-04-21T10:38:58.210968515Z" level=info msg="shim disconnected" id=03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714 namespace=k8s.io Apr 21 10:38:58.211066 containerd[1462]: time="2026-04-21T10:38:58.211032497Z" level=warning msg="cleaning up after shim disconnected" id=03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714 namespace=k8s.io Apr 21 10:38:58.211066 containerd[1462]: time="2026-04-21T10:38:58.211039737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:38:58.334395 kubelet[2510]: E0421 10:38:58.334249 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:38:58.396084 kubelet[2510]: I0421 10:38:58.395979 2510 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:38:58.397262 kubelet[2510]: E0421 10:38:58.396355 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:38:58.397293 containerd[1462]: time="2026-04-21T10:38:58.396169159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:38:58.401469 systemd[1]: run-containerd-runc-k8s.io-03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714-runc.rXxRJ3.mount: Deactivated successfully. Apr 21 10:38:58.401693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03d7d30c1ba73fecba44a457c6cabd3644b0d79d126806675cc8aa5e8e5bb714-rootfs.mount: Deactivated successfully. 
Apr 21 10:38:58.413100 kubelet[2510]: I0421 10:38:58.412637 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-f96fbbbd5-jcnr4" podStartSLOduration=3.235834053 podStartE2EDuration="6.412615168s" podCreationTimestamp="2026-04-21 10:38:52 +0000 UTC" firstStartedPulling="2026-04-21 10:38:53.219208823 +0000 UTC m=+16.972540987" lastFinishedPulling="2026-04-21 10:38:56.395989938 +0000 UTC m=+20.149322102" observedRunningTime="2026-04-21 10:38:57.402256806 +0000 UTC m=+21.155588970" watchObservedRunningTime="2026-04-21 10:38:58.412615168 +0000 UTC m=+22.165947380" Apr 21 10:39:00.333421 kubelet[2510]: E0421 10:39:00.333351 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:01.408581 update_engine[1443]: I20260421 10:39:01.408452 1443 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:39:01.429808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3257) Apr 21 10:39:01.463883 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3261) Apr 21 10:39:02.333307 kubelet[2510]: E0421 10:39:02.333186 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:04.334160 kubelet[2510]: E0421 10:39:04.334044 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:05.747979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756600099.mount: Deactivated successfully. 
Apr 21 10:39:05.983046 containerd[1462]: time="2026-04-21T10:39:05.982848893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:39:05.983046 containerd[1462]: time="2026-04-21T10:39:05.982976601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:05.984729 containerd[1462]: time="2026-04-21T10:39:05.984583284Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:05.987563 containerd[1462]: time="2026-04-21T10:39:05.987485352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:05.988023 containerd[1462]: time="2026-04-21T10:39:05.987983800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.591785464s" Apr 21 10:39:05.988023 containerd[1462]: time="2026-04-21T10:39:05.988023837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:39:05.994412 containerd[1462]: time="2026-04-21T10:39:05.994331830Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:39:06.005256 containerd[1462]: time="2026-04-21T10:39:06.005166688Z" level=info 
msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5\"" Apr 21 10:39:06.006167 containerd[1462]: time="2026-04-21T10:39:06.005732944Z" level=info msg="StartContainer for \"9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5\"" Apr 21 10:39:06.040943 systemd[1]: Started cri-containerd-9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5.scope - libcontainer container 9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5. Apr 21 10:39:06.068289 containerd[1462]: time="2026-04-21T10:39:06.068247823Z" level=info msg="StartContainer for \"9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5\" returns successfully" Apr 21 10:39:06.109452 systemd[1]: cri-containerd-9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5.scope: Deactivated successfully. 
Apr 21 10:39:06.163193 containerd[1462]: time="2026-04-21T10:39:06.163092332Z" level=info msg="shim disconnected" id=9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5 namespace=k8s.io Apr 21 10:39:06.163193 containerd[1462]: time="2026-04-21T10:39:06.163179746Z" level=warning msg="cleaning up after shim disconnected" id=9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5 namespace=k8s.io Apr 21 10:39:06.163193 containerd[1462]: time="2026-04-21T10:39:06.163188362Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:39:06.334383 kubelet[2510]: E0421 10:39:06.334350 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:06.410980 containerd[1462]: time="2026-04-21T10:39:06.410590001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:39:06.748607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1677163855ef5993cc141a7d73c9d97579cd5ff0beae05a6f03025fb362df5-rootfs.mount: Deactivated successfully. 
Apr 21 10:39:08.333704 kubelet[2510]: E0421 10:39:08.333576 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:10.335335 kubelet[2510]: E0421 10:39:10.335284 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:10.987635 containerd[1462]: time="2026-04-21T10:39:10.987566682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:10.988339 containerd[1462]: time="2026-04-21T10:39:10.988297578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:39:10.989492 containerd[1462]: time="2026-04-21T10:39:10.989432531Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:10.992062 containerd[1462]: time="2026-04-21T10:39:10.991910085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:10.992548 containerd[1462]: time="2026-04-21T10:39:10.992439420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.581819369s" Apr 21 10:39:10.992548 containerd[1462]: time="2026-04-21T10:39:10.992466957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:39:10.998589 containerd[1462]: time="2026-04-21T10:39:10.998518838Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:39:11.010206 containerd[1462]: time="2026-04-21T10:39:11.010122046Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f\"" Apr 21 10:39:11.010901 containerd[1462]: time="2026-04-21T10:39:11.010584065Z" level=info msg="StartContainer for \"526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f\"" Apr 21 10:39:11.052943 systemd[1]: Started cri-containerd-526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f.scope - libcontainer container 526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f. Apr 21 10:39:11.073896 containerd[1462]: time="2026-04-21T10:39:11.073810423Z" level=info msg="StartContainer for \"526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f\" returns successfully" Apr 21 10:39:11.507957 systemd[1]: cri-containerd-526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f.scope: Deactivated successfully. Apr 21 10:39:11.529502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f-rootfs.mount: Deactivated successfully. 
Apr 21 10:39:11.551890 containerd[1462]: time="2026-04-21T10:39:11.551743321Z" level=info msg="shim disconnected" id=526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f namespace=k8s.io Apr 21 10:39:11.551890 containerd[1462]: time="2026-04-21T10:39:11.551839817Z" level=warning msg="cleaning up after shim disconnected" id=526fbc6d56a20c4718907fe3b6ca260aefb024db38807f48a46347a72061fa4f namespace=k8s.io Apr 21 10:39:11.551890 containerd[1462]: time="2026-04-21T10:39:11.551846815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:39:11.580962 kubelet[2510]: I0421 10:39:11.580913 2510 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 10:39:11.621326 systemd[1]: Created slice kubepods-burstable-pod0bb43cc7_550a_4d6f_af40_1ccdb9ab522e.slice - libcontainer container kubepods-burstable-pod0bb43cc7_550a_4d6f_af40_1ccdb9ab522e.slice. Apr 21 10:39:11.628172 systemd[1]: Created slice kubepods-besteffort-pod8e8005a6_289b_49cd_bea4_c17d23bb38fa.slice - libcontainer container kubepods-besteffort-pod8e8005a6_289b_49cd_bea4_c17d23bb38fa.slice. Apr 21 10:39:11.633033 systemd[1]: Created slice kubepods-besteffort-pod5f329c28_cbdf_4ba9_ae13_ffbe6877b9d4.slice - libcontainer container kubepods-besteffort-pod5f329c28_cbdf_4ba9_ae13_ffbe6877b9d4.slice. Apr 21 10:39:11.640285 systemd[1]: Created slice kubepods-burstable-pod07a1844c_5aff_41d5_92cc_1b454992a7e4.slice - libcontainer container kubepods-burstable-pod07a1844c_5aff_41d5_92cc_1b454992a7e4.slice. Apr 21 10:39:11.645971 systemd[1]: Created slice kubepods-besteffort-pod06d1d10d_8e0d_411e_9e6d_45fe89536c14.slice - libcontainer container kubepods-besteffort-pod06d1d10d_8e0d_411e_9e6d_45fe89536c14.slice. Apr 21 10:39:11.649949 systemd[1]: Created slice kubepods-besteffort-podb7c9590f_6caa_4345_9c90_9f8d06102e17.slice - libcontainer container kubepods-besteffort-podb7c9590f_6caa_4345_9c90_9f8d06102e17.slice. 
Apr 21 10:39:11.658282 systemd[1]: Created slice kubepods-besteffort-podb7a1fd50_1ed4_4589_8953_65abd596d417.slice - libcontainer container kubepods-besteffort-podb7a1fd50_1ed4_4589_8953_65abd596d417.slice. Apr 21 10:39:11.663207 kubelet[2510]: I0421 10:39:11.663175 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-backend-key-pair\") pod \"whisker-fb5f4844c-k6m8p\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " pod="calico-system/whisker-fb5f4844c-k6m8p" Apr 21 10:39:11.663207 kubelet[2510]: I0421 10:39:11.663209 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dggn\" (UniqueName: \"kubernetes.io/projected/b7c9590f-6caa-4345-9c90-9f8d06102e17-kube-api-access-9dggn\") pod \"calico-apiserver-6d96d7bfc7-92vsq\" (UID: \"b7c9590f-6caa-4345-9c90-9f8d06102e17\") " pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" Apr 21 10:39:11.663371 kubelet[2510]: I0421 10:39:11.663223 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e8005a6-289b-49cd-bea4-c17d23bb38fa-calico-apiserver-certs\") pod \"calico-apiserver-6d96d7bfc7-rtbsk\" (UID: \"8e8005a6-289b-49cd-bea4-c17d23bb38fa\") " pod="calico-system/calico-apiserver-6d96d7bfc7-rtbsk" Apr 21 10:39:11.663371 kubelet[2510]: I0421 10:39:11.663236 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ftk\" (UniqueName: \"kubernetes.io/projected/07a1844c-5aff-41d5-92cc-1b454992a7e4-kube-api-access-z7ftk\") pod \"coredns-7d764666f9-cqqqk\" (UID: \"07a1844c-5aff-41d5-92cc-1b454992a7e4\") " pod="kube-system/coredns-7d764666f9-cqqqk" Apr 21 10:39:11.663371 kubelet[2510]: I0421 10:39:11.663247 2510 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07a1844c-5aff-41d5-92cc-1b454992a7e4-config-volume\") pod \"coredns-7d764666f9-cqqqk\" (UID: \"07a1844c-5aff-41d5-92cc-1b454992a7e4\") " pod="kube-system/coredns-7d764666f9-cqqqk" Apr 21 10:39:11.663371 kubelet[2510]: I0421 10:39:11.663259 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbjg\" (UniqueName: \"kubernetes.io/projected/0bb43cc7-550a-4d6f-af40-1ccdb9ab522e-kube-api-access-5rbjg\") pod \"coredns-7d764666f9-mqdmg\" (UID: \"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e\") " pod="kube-system/coredns-7d764666f9-mqdmg" Apr 21 10:39:11.663371 kubelet[2510]: I0421 10:39:11.663271 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdh7\" (UniqueName: \"kubernetes.io/projected/8e8005a6-289b-49cd-bea4-c17d23bb38fa-kube-api-access-wfdh7\") pod \"calico-apiserver-6d96d7bfc7-rtbsk\" (UID: \"8e8005a6-289b-49cd-bea4-c17d23bb38fa\") " pod="calico-system/calico-apiserver-6d96d7bfc7-rtbsk" Apr 21 10:39:11.663454 kubelet[2510]: I0421 10:39:11.663282 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7a1fd50-1ed4-4589-8953-65abd596d417-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-f2zn2\" (UID: \"b7a1fd50-1ed4-4589-8953-65abd596d417\") " pod="calico-system/goldmane-9f7667bb8-f2zn2" Apr 21 10:39:11.663454 kubelet[2510]: I0421 10:39:11.663292 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-ca-bundle\") pod \"whisker-fb5f4844c-k6m8p\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " pod="calico-system/whisker-fb5f4844c-k6m8p" Apr 21 10:39:11.663454 
kubelet[2510]: I0421 10:39:11.663303 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a1fd50-1ed4-4589-8953-65abd596d417-config\") pod \"goldmane-9f7667bb8-f2zn2\" (UID: \"b7a1fd50-1ed4-4589-8953-65abd596d417\") " pod="calico-system/goldmane-9f7667bb8-f2zn2" Apr 21 10:39:11.663454 kubelet[2510]: I0421 10:39:11.663314 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4-tigera-ca-bundle\") pod \"calico-kube-controllers-6688bb788d-6t6qw\" (UID: \"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4\") " pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" Apr 21 10:39:11.663454 kubelet[2510]: I0421 10:39:11.663352 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7c9590f-6caa-4345-9c90-9f8d06102e17-calico-apiserver-certs\") pod \"calico-apiserver-6d96d7bfc7-92vsq\" (UID: \"b7c9590f-6caa-4345-9c90-9f8d06102e17\") " pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" Apr 21 10:39:11.663532 kubelet[2510]: I0421 10:39:11.663374 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb43cc7-550a-4d6f-af40-1ccdb9ab522e-config-volume\") pod \"coredns-7d764666f9-mqdmg\" (UID: \"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e\") " pod="kube-system/coredns-7d764666f9-mqdmg" Apr 21 10:39:11.663532 kubelet[2510]: I0421 10:39:11.663390 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2nqg\" (UniqueName: \"kubernetes.io/projected/06d1d10d-8e0d-411e-9e6d-45fe89536c14-kube-api-access-h2nqg\") pod \"whisker-fb5f4844c-k6m8p\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " 
pod="calico-system/whisker-fb5f4844c-k6m8p" Apr 21 10:39:11.663532 kubelet[2510]: I0421 10:39:11.663418 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6zj\" (UniqueName: \"kubernetes.io/projected/5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4-kube-api-access-gl6zj\") pod \"calico-kube-controllers-6688bb788d-6t6qw\" (UID: \"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4\") " pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" Apr 21 10:39:11.663532 kubelet[2510]: I0421 10:39:11.663432 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b7a1fd50-1ed4-4589-8953-65abd596d417-goldmane-key-pair\") pod \"goldmane-9f7667bb8-f2zn2\" (UID: \"b7a1fd50-1ed4-4589-8953-65abd596d417\") " pod="calico-system/goldmane-9f7667bb8-f2zn2" Apr 21 10:39:11.663532 kubelet[2510]: I0421 10:39:11.663443 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8plj\" (UniqueName: \"kubernetes.io/projected/b7a1fd50-1ed4-4589-8953-65abd596d417-kube-api-access-q8plj\") pod \"goldmane-9f7667bb8-f2zn2\" (UID: \"b7a1fd50-1ed4-4589-8953-65abd596d417\") " pod="calico-system/goldmane-9f7667bb8-f2zn2" Apr 21 10:39:11.663608 kubelet[2510]: I0421 10:39:11.663453 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-nginx-config\") pod \"whisker-fb5f4844c-k6m8p\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " pod="calico-system/whisker-fb5f4844c-k6m8p" Apr 21 10:39:11.929043 kubelet[2510]: E0421 10:39:11.928980 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:11.929719 containerd[1462]: 
time="2026-04-21T10:39:11.929636390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mqdmg,Uid:0bb43cc7-550a-4d6f-af40-1ccdb9ab522e,Namespace:kube-system,Attempt:0,}" Apr 21 10:39:11.939009 containerd[1462]: time="2026-04-21T10:39:11.938957640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6688bb788d-6t6qw,Uid:5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:11.946348 kubelet[2510]: E0421 10:39:11.946321 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:11.952963 containerd[1462]: time="2026-04-21T10:39:11.952542390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-rtbsk,Uid:8e8005a6-289b-49cd-bea4-c17d23bb38fa,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:11.952963 containerd[1462]: time="2026-04-21T10:39:11.952582514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb5f4844c-k6m8p,Uid:06d1d10d-8e0d-411e-9e6d-45fe89536c14,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:11.953072 containerd[1462]: time="2026-04-21T10:39:11.953007311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cqqqk,Uid:07a1844c-5aff-41d5-92cc-1b454992a7e4,Namespace:kube-system,Attempt:0,}" Apr 21 10:39:11.960003 containerd[1462]: time="2026-04-21T10:39:11.959952515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-92vsq,Uid:b7c9590f-6caa-4345-9c90-9f8d06102e17,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:11.966593 containerd[1462]: time="2026-04-21T10:39:11.966566804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-f2zn2,Uid:b7a1fd50-1ed4-4589-8953-65abd596d417,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:12.102044 containerd[1462]: time="2026-04-21T10:39:12.101991233Z" 
level=error msg="Failed to destroy network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.103608 containerd[1462]: time="2026-04-21T10:39:12.103432806Z" level=error msg="encountered an error cleaning up failed sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.103608 containerd[1462]: time="2026-04-21T10:39:12.103479493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cqqqk,Uid:07a1844c-5aff-41d5-92cc-1b454992a7e4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.103946 containerd[1462]: time="2026-04-21T10:39:12.103018124Z" level=error msg="Failed to destroy network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.104294 containerd[1462]: time="2026-04-21T10:39:12.104248517Z" level=error msg="encountered an error cleaning up failed sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.104329 containerd[1462]: time="2026-04-21T10:39:12.104302787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6688bb788d-6t6qw,Uid:5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.116549 containerd[1462]: time="2026-04-21T10:39:12.116483559Z" level=error msg="Failed to destroy network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.117542 containerd[1462]: time="2026-04-21T10:39:12.117099332Z" level=error msg="encountered an error cleaning up failed sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.117542 containerd[1462]: time="2026-04-21T10:39:12.117213099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mqdmg,Uid:0bb43cc7-550a-4d6f-af40-1ccdb9ab522e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.117887 containerd[1462]: time="2026-04-21T10:39:12.117679235Z" level=error msg="Failed to destroy network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.117887 containerd[1462]: time="2026-04-21T10:39:12.117846302Z" level=error msg="Failed to destroy network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.118492 containerd[1462]: time="2026-04-21T10:39:12.118417568Z" level=error msg="encountered an error cleaning up failed sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.118492 containerd[1462]: time="2026-04-21T10:39:12.118475350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-92vsq,Uid:b7c9590f-6caa-4345-9c90-9f8d06102e17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.118593 kubelet[2510]: E0421 10:39:12.118552 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.118648 kubelet[2510]: E0421 10:39:12.118608 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cqqqk" Apr 21 10:39:12.118648 kubelet[2510]: E0421 10:39:12.118624 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cqqqk" Apr 21 10:39:12.118688 kubelet[2510]: E0421 10:39:12.118671 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-cqqqk_kube-system(07a1844c-5aff-41d5-92cc-1b454992a7e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-cqqqk_kube-system(07a1844c-5aff-41d5-92cc-1b454992a7e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7d764666f9-cqqqk" podUID="07a1844c-5aff-41d5-92cc-1b454992a7e4" Apr 21 10:39:12.118826 kubelet[2510]: E0421 10:39:12.118430 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.118922 kubelet[2510]: E0421 10:39:12.118832 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mqdmg" Apr 21 10:39:12.118922 kubelet[2510]: E0421 10:39:12.118848 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mqdmg" Apr 21 10:39:12.118922 kubelet[2510]: E0421 10:39:12.118885 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-mqdmg_kube-system(0bb43cc7-550a-4d6f-af40-1ccdb9ab522e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-mqdmg_kube-system(0bb43cc7-550a-4d6f-af40-1ccdb9ab522e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mqdmg" podUID="0bb43cc7-550a-4d6f-af40-1ccdb9ab522e" Apr 21 10:39:12.119086 kubelet[2510]: E0421 10:39:12.118922 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.119086 kubelet[2510]: E0421 10:39:12.118934 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" Apr 21 10:39:12.119086 kubelet[2510]: E0421 10:39:12.118944 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" Apr 21 10:39:12.119221 containerd[1462]: time="2026-04-21T10:39:12.118939314Z" level=error msg="encountered an error cleaning up failed sandbox 
\"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.119221 containerd[1462]: time="2026-04-21T10:39:12.118971896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb5f4844c-k6m8p,Uid:06d1d10d-8e0d-411e-9e6d-45fe89536c14,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.119280 kubelet[2510]: E0421 10:39:12.118961 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6688bb788d-6t6qw_calico-system(5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6688bb788d-6t6qw_calico-system(5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" podUID="5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4" Apr 21 10:39:12.119280 kubelet[2510]: E0421 10:39:12.119007 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.119280 kubelet[2510]: E0421 10:39:12.119070 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" Apr 21 10:39:12.119438 kubelet[2510]: E0421 10:39:12.119079 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" Apr 21 10:39:12.119438 kubelet[2510]: E0421 10:39:12.119099 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d96d7bfc7-92vsq_calico-system(b7c9590f-6caa-4345-9c90-9f8d06102e17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d96d7bfc7-92vsq_calico-system(b7c9590f-6caa-4345-9c90-9f8d06102e17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" podUID="b7c9590f-6caa-4345-9c90-9f8d06102e17" Apr 21 10:39:12.119438 
kubelet[2510]: E0421 10:39:12.119194 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.119566 kubelet[2510]: E0421 10:39:12.119243 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fb5f4844c-k6m8p" Apr 21 10:39:12.119566 kubelet[2510]: E0421 10:39:12.119260 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fb5f4844c-k6m8p" Apr 21 10:39:12.119566 kubelet[2510]: E0421 10:39:12.119322 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fb5f4844c-k6m8p_calico-system(06d1d10d-8e0d-411e-9e6d-45fe89536c14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fb5f4844c-k6m8p_calico-system(06d1d10d-8e0d-411e-9e6d-45fe89536c14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fb5f4844c-k6m8p" podUID="06d1d10d-8e0d-411e-9e6d-45fe89536c14" Apr 21 10:39:12.129813 containerd[1462]: time="2026-04-21T10:39:12.129679859Z" level=error msg="Failed to destroy network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.130061 containerd[1462]: time="2026-04-21T10:39:12.130029597Z" level=error msg="encountered an error cleaning up failed sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.130115 containerd[1462]: time="2026-04-21T10:39:12.130089461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-rtbsk,Uid:8e8005a6-289b-49cd-bea4-c17d23bb38fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.130327 kubelet[2510]: E0421 10:39:12.130292 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.130413 kubelet[2510]: E0421 10:39:12.130339 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d96d7bfc7-rtbsk" Apr 21 10:39:12.130413 kubelet[2510]: E0421 10:39:12.130357 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d96d7bfc7-rtbsk" Apr 21 10:39:12.130470 kubelet[2510]: E0421 10:39:12.130418 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d96d7bfc7-rtbsk_calico-system(8e8005a6-289b-49cd-bea4-c17d23bb38fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d96d7bfc7-rtbsk_calico-system(8e8005a6-289b-49cd-bea4-c17d23bb38fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d96d7bfc7-rtbsk" podUID="8e8005a6-289b-49cd-bea4-c17d23bb38fa" Apr 21 10:39:12.142638 containerd[1462]: time="2026-04-21T10:39:12.142588812Z" level=error 
msg="Failed to destroy network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.142917 containerd[1462]: time="2026-04-21T10:39:12.142875659Z" level=error msg="encountered an error cleaning up failed sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.142949 containerd[1462]: time="2026-04-21T10:39:12.142927954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-f2zn2,Uid:b7a1fd50-1ed4-4589-8953-65abd596d417,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.143243 kubelet[2510]: E0421 10:39:12.143199 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.143281 kubelet[2510]: E0421 10:39:12.143244 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-f2zn2" Apr 21 10:39:12.143281 kubelet[2510]: E0421 10:39:12.143260 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-f2zn2" Apr 21 10:39:12.143335 kubelet[2510]: E0421 10:39:12.143306 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-f2zn2_calico-system(b7a1fd50-1ed4-4589-8953-65abd596d417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-f2zn2_calico-system(b7a1fd50-1ed4-4589-8953-65abd596d417)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-f2zn2" podUID="b7a1fd50-1ed4-4589-8953-65abd596d417" Apr 21 10:39:12.338552 systemd[1]: Created slice kubepods-besteffort-pod9fdc5648_d90a_492e_8550_ef4cb967e14b.slice - libcontainer container kubepods-besteffort-pod9fdc5648_d90a_492e_8550_ef4cb967e14b.slice. 
Apr 21 10:39:12.343297 containerd[1462]: time="2026-04-21T10:39:12.343263730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshx6,Uid:9fdc5648-d90a-492e-8550-ef4cb967e14b,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:12.391661 containerd[1462]: time="2026-04-21T10:39:12.391608458Z" level=error msg="Failed to destroy network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.392019 containerd[1462]: time="2026-04-21T10:39:12.391969337Z" level=error msg="encountered an error cleaning up failed sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.392054 containerd[1462]: time="2026-04-21T10:39:12.392037453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshx6,Uid:9fdc5648-d90a-492e-8550-ef4cb967e14b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.392395 kubelet[2510]: E0421 10:39:12.392334 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.392435 kubelet[2510]: E0421 10:39:12.392411 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kshx6" Apr 21 10:39:12.392435 kubelet[2510]: E0421 10:39:12.392425 2510 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kshx6" Apr 21 10:39:12.392512 kubelet[2510]: E0421 10:39:12.392485 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kshx6_calico-system(9fdc5648-d90a-492e-8550-ef4cb967e14b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kshx6_calico-system(9fdc5648-d90a-492e-8550-ef4cb967e14b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:12.423585 kubelet[2510]: I0421 10:39:12.423545 2510 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:12.424806 kubelet[2510]: I0421 10:39:12.424729 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:12.425886 kubelet[2510]: I0421 10:39:12.425861 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:12.426687 kubelet[2510]: I0421 10:39:12.426674 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Apr 21 10:39:12.433524 containerd[1462]: time="2026-04-21T10:39:12.433423388Z" level=info msg="StopPodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\"" Apr 21 10:39:12.433700 containerd[1462]: time="2026-04-21T10:39:12.433645774Z" level=info msg="StopPodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\"" Apr 21 10:39:12.434626 containerd[1462]: time="2026-04-21T10:39:12.433997177Z" level=info msg="StopPodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\"" Apr 21 10:39:12.434626 containerd[1462]: time="2026-04-21T10:39:12.434034489Z" level=info msg="StopPodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\"" Apr 21 10:39:12.434734 containerd[1462]: time="2026-04-21T10:39:12.434647597Z" level=info msg="Ensure that sandbox 99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46 in task-service has been cleanup successfully" Apr 21 10:39:12.434734 containerd[1462]: time="2026-04-21T10:39:12.434659515Z" level=info msg="Ensure that sandbox 03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6 in task-service has been cleanup successfully" Apr 21 10:39:12.434865 containerd[1462]: time="2026-04-21T10:39:12.434824886Z" 
level=info msg="Ensure that sandbox 74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589 in task-service has been cleanup successfully" Apr 21 10:39:12.436312 containerd[1462]: time="2026-04-21T10:39:12.434857900Z" level=info msg="Ensure that sandbox 7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb in task-service has been cleanup successfully" Apr 21 10:39:12.445677 kubelet[2510]: I0421 10:39:12.445560 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:12.448064 containerd[1462]: time="2026-04-21T10:39:12.448030230Z" level=info msg="StopPodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\"" Apr 21 10:39:12.448221 containerd[1462]: time="2026-04-21T10:39:12.448181508Z" level=info msg="Ensure that sandbox 93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5 in task-service has been cleanup successfully" Apr 21 10:39:12.449420 kubelet[2510]: I0421 10:39:12.449311 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:12.449728 containerd[1462]: time="2026-04-21T10:39:12.449621815Z" level=info msg="StopPodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\"" Apr 21 10:39:12.449844 containerd[1462]: time="2026-04-21T10:39:12.449729137Z" level=info msg="Ensure that sandbox b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a in task-service has been cleanup successfully" Apr 21 10:39:12.453224 kubelet[2510]: I0421 10:39:12.453198 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:12.453644 containerd[1462]: time="2026-04-21T10:39:12.453628240Z" level=info msg="StopPodSandbox for 
\"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\"" Apr 21 10:39:12.454101 containerd[1462]: time="2026-04-21T10:39:12.454086072Z" level=info msg="Ensure that sandbox 713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2 in task-service has been cleanup successfully" Apr 21 10:39:12.455661 kubelet[2510]: I0421 10:39:12.455467 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:12.456135 containerd[1462]: time="2026-04-21T10:39:12.456121540Z" level=info msg="StopPodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\"" Apr 21 10:39:12.457274 containerd[1462]: time="2026-04-21T10:39:12.457222402Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:39:12.457796 containerd[1462]: time="2026-04-21T10:39:12.457740333Z" level=info msg="Ensure that sandbox 32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8 in task-service has been cleanup successfully" Apr 21 10:39:12.491728 containerd[1462]: time="2026-04-21T10:39:12.491694245Z" level=info msg="CreateContainer within sandbox \"a90f8a84a2e6b2417d62b1325d71c963b519ef1f378975baa2f67826b9a2a831\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"68dc7eab51d39f7d72fa3331caa2fe8da4760981ddea3dec42a63517a30cb531\"" Apr 21 10:39:12.493348 containerd[1462]: time="2026-04-21T10:39:12.493166107Z" level=info msg="StartContainer for \"68dc7eab51d39f7d72fa3331caa2fe8da4760981ddea3dec42a63517a30cb531\"" Apr 21 10:39:12.502041 containerd[1462]: time="2026-04-21T10:39:12.502013031Z" level=error msg="StopPodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" failed" error="failed to destroy network for sandbox 
\"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.502451 kubelet[2510]: E0421 10:39:12.502414 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:12.502535 kubelet[2510]: E0421 10:39:12.502456 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589"} Apr 21 10:39:12.502535 kubelet[2510]: E0421 10:39:12.502501 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.502535 kubelet[2510]: E0421 10:39:12.502522 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fb5f4844c-k6m8p" podUID="06d1d10d-8e0d-411e-9e6d-45fe89536c14" Apr 21 10:39:12.510196 containerd[1462]: time="2026-04-21T10:39:12.510103536Z" level=error msg="StopPodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" failed" error="failed to destroy network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.510695 kubelet[2510]: E0421 10:39:12.510546 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:12.510695 kubelet[2510]: E0421 10:39:12.510582 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a"} Apr 21 10:39:12.510695 kubelet[2510]: E0421 10:39:12.510608 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07a1844c-5aff-41d5-92cc-1b454992a7e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Apr 21 10:39:12.510695 kubelet[2510]: E0421 10:39:12.510631 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07a1844c-5aff-41d5-92cc-1b454992a7e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cqqqk" podUID="07a1844c-5aff-41d5-92cc-1b454992a7e4" Apr 21 10:39:12.512288 containerd[1462]: time="2026-04-21T10:39:12.512241671Z" level=error msg="StopPodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" failed" error="failed to destroy network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.512625 kubelet[2510]: E0421 10:39:12.512500 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:12.512625 kubelet[2510]: E0421 10:39:12.512565 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb"} Apr 21 10:39:12.512625 kubelet[2510]: E0421 10:39:12.512583 2510 
kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fdc5648-d90a-492e-8550-ef4cb967e14b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.512625 kubelet[2510]: E0421 10:39:12.512602 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fdc5648-d90a-492e-8550-ef4cb967e14b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kshx6" podUID="9fdc5648-d90a-492e-8550-ef4cb967e14b" Apr 21 10:39:12.514749 containerd[1462]: time="2026-04-21T10:39:12.514690897Z" level=error msg="StopPodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" failed" error="failed to destroy network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.514962 kubelet[2510]: E0421 10:39:12.514945 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:12.515081 kubelet[2510]: E0421 10:39:12.515024 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5"} Apr 21 10:39:12.515081 kubelet[2510]: E0421 10:39:12.515044 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7a1fd50-1ed4-4589-8953-65abd596d417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.515081 kubelet[2510]: E0421 10:39:12.515060 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7a1fd50-1ed4-4589-8953-65abd596d417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-f2zn2" podUID="b7a1fd50-1ed4-4589-8953-65abd596d417" Apr 21 10:39:12.517294 containerd[1462]: time="2026-04-21T10:39:12.517238651Z" level=error msg="StopPodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" failed" error="failed to destroy network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.518334 kubelet[2510]: E0421 10:39:12.518273 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:12.518334 kubelet[2510]: E0421 10:39:12.518296 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6"} Apr 21 10:39:12.518334 kubelet[2510]: E0421 10:39:12.518313 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e8005a6-289b-49cd-bea4-c17d23bb38fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.518334 kubelet[2510]: E0421 10:39:12.518329 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e8005a6-289b-49cd-bea4-c17d23bb38fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-6d96d7bfc7-rtbsk" podUID="8e8005a6-289b-49cd-bea4-c17d23bb38fa" Apr 21 10:39:12.518868 containerd[1462]: time="2026-04-21T10:39:12.518725560Z" level=error msg="StopPodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" failed" error="failed to destroy network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.519004 kubelet[2510]: E0421 10:39:12.518905 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Apr 21 10:39:12.519004 kubelet[2510]: E0421 10:39:12.518969 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"} Apr 21 10:39:12.519004 kubelet[2510]: E0421 10:39:12.518987 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7c9590f-6caa-4345-9c90-9f8d06102e17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.519177 kubelet[2510]: E0421 10:39:12.519003 2510 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7c9590f-6caa-4345-9c90-9f8d06102e17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" podUID="b7c9590f-6caa-4345-9c90-9f8d06102e17" Apr 21 10:39:12.523371 containerd[1462]: time="2026-04-21T10:39:12.523302908Z" level=error msg="StopPodSandbox for \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" failed" error="failed to destroy network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.523554 kubelet[2510]: E0421 10:39:12.523521 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:12.523583 kubelet[2510]: E0421 10:39:12.523559 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2"} Apr 21 10:39:12.523603 kubelet[2510]: E0421 10:39:12.523588 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.523663 kubelet[2510]: E0421 10:39:12.523603 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" podUID="5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4" Apr 21 10:39:12.524227 containerd[1462]: time="2026-04-21T10:39:12.524120159Z" level=error msg="StopPodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" failed" error="failed to destroy network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:39:12.524393 kubelet[2510]: E0421 10:39:12.524359 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:12.524444 kubelet[2510]: E0421 10:39:12.524397 2510 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8"} Apr 21 10:39:12.524444 kubelet[2510]: E0421 10:39:12.524417 2510 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:39:12.524444 kubelet[2510]: E0421 10:39:12.524433 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mqdmg" podUID="0bb43cc7-550a-4d6f-af40-1ccdb9ab522e" Apr 21 10:39:12.535961 systemd[1]: Started cri-containerd-68dc7eab51d39f7d72fa3331caa2fe8da4760981ddea3dec42a63517a30cb531.scope - libcontainer container 68dc7eab51d39f7d72fa3331caa2fe8da4760981ddea3dec42a63517a30cb531. 
Apr 21 10:39:12.559555 containerd[1462]: time="2026-04-21T10:39:12.559434350Z" level=info msg="StartContainer for \"68dc7eab51d39f7d72fa3331caa2fe8da4760981ddea3dec42a63517a30cb531\" returns successfully" Apr 21 10:39:13.011489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46-shm.mount: Deactivated successfully. Apr 21 10:39:13.011617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589-shm.mount: Deactivated successfully. Apr 21 10:39:13.011686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a-shm.mount: Deactivated successfully. Apr 21 10:39:13.011814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6-shm.mount: Deactivated successfully. Apr 21 10:39:13.011892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2-shm.mount: Deactivated successfully. Apr 21 10:39:13.011954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8-shm.mount: Deactivated successfully. 
Apr 21 10:39:13.460331 containerd[1462]: time="2026-04-21T10:39:13.460284833Z" level=info msg="StopPodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\"" Apr 21 10:39:13.488474 kubelet[2510]: I0421 10:39:13.488367 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-d5t6r" podStartSLOduration=2.267406999 podStartE2EDuration="21.488357329s" podCreationTimestamp="2026-04-21 10:38:52 +0000 UTC" firstStartedPulling="2026-04-21 10:38:53.22390533 +0000 UTC m=+16.977237495" lastFinishedPulling="2026-04-21 10:39:12.444855662 +0000 UTC m=+36.198187825" observedRunningTime="2026-04-21 10:39:13.488286552 +0000 UTC m=+37.241618727" watchObservedRunningTime="2026-04-21 10:39:13.488357329 +0000 UTC m=+37.241689504" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.508 [INFO][3898] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.508 [INFO][3898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" iface="eth0" netns="/var/run/netns/cni-6fff47a7-1eee-3a7d-89d9-4886418516f0" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.508 [INFO][3898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" iface="eth0" netns="/var/run/netns/cni-6fff47a7-1eee-3a7d-89d9-4886418516f0" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.508 [INFO][3898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" iface="eth0" netns="/var/run/netns/cni-6fff47a7-1eee-3a7d-89d9-4886418516f0" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.509 [INFO][3898] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.509 [INFO][3898] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.527 [INFO][3907] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.527 [INFO][3907] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.527 [INFO][3907] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.534 [WARNING][3907] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.534 [INFO][3907] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.536 [INFO][3907] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:13.540919 containerd[1462]: 2026-04-21 10:39:13.539 [INFO][3898] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:13.542972 containerd[1462]: time="2026-04-21T10:39:13.541956483Z" level=info msg="TearDown network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" successfully" Apr 21 10:39:13.542972 containerd[1462]: time="2026-04-21T10:39:13.542019351Z" level=info msg="StopPodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" returns successfully" Apr 21 10:39:13.542965 systemd[1]: run-netns-cni\x2d6fff47a7\x2d1eee\x2d3a7d\x2d89d9\x2d4886418516f0.mount: Deactivated successfully. 
Apr 21 10:39:13.675569 kubelet[2510]: I0421 10:39:13.675466 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-nginx-config\" (UniqueName: \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-nginx-config\") pod \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " Apr 21 10:39:13.675569 kubelet[2510]: I0421 10:39:13.675526 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-backend-key-pair\") pod \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " Apr 21 10:39:13.675569 kubelet[2510]: I0421 10:39:13.675548 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-ca-bundle\") pod \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " Apr 21 10:39:13.675569 kubelet[2510]: I0421 10:39:13.675566 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/06d1d10d-8e0d-411e-9e6d-45fe89536c14-kube-api-access-h2nqg\" (UniqueName: \"kubernetes.io/projected/06d1d10d-8e0d-411e-9e6d-45fe89536c14-kube-api-access-h2nqg\") pod \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\" (UID: \"06d1d10d-8e0d-411e-9e6d-45fe89536c14\") " Apr 21 10:39:13.676174 kubelet[2510]: I0421 10:39:13.676103 2510 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-ca-bundle" pod "06d1d10d-8e0d-411e-9e6d-45fe89536c14" (UID: "06d1d10d-8e0d-411e-9e6d-45fe89536c14"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:39:13.676407 kubelet[2510]: I0421 10:39:13.676356 2510 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-nginx-config" pod "06d1d10d-8e0d-411e-9e6d-45fe89536c14" (UID: "06d1d10d-8e0d-411e-9e6d-45fe89536c14"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:39:13.679295 kubelet[2510]: I0421 10:39:13.679246 2510 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-backend-key-pair" pod "06d1d10d-8e0d-411e-9e6d-45fe89536c14" (UID: "06d1d10d-8e0d-411e-9e6d-45fe89536c14"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:39:13.679341 kubelet[2510]: I0421 10:39:13.679249 2510 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d1d10d-8e0d-411e-9e6d-45fe89536c14-kube-api-access-h2nqg" pod "06d1d10d-8e0d-411e-9e6d-45fe89536c14" (UID: "06d1d10d-8e0d-411e-9e6d-45fe89536c14"). InnerVolumeSpecName "kube-api-access-h2nqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:39:13.680587 systemd[1]: var-lib-kubelet-pods-06d1d10d\x2d8e0d\x2d411e\x2d9e6d\x2d45fe89536c14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2nqg.mount: Deactivated successfully. Apr 21 10:39:13.680684 systemd[1]: var-lib-kubelet-pods-06d1d10d\x2d8e0d\x2d411e\x2d9e6d\x2d45fe89536c14-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:39:13.727964 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772). 
Apr 21 10:39:13.777139 kubelet[2510]: I0421 10:39:13.776860 2510 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 21 10:39:13.777139 kubelet[2510]: I0421 10:39:13.776883 2510 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 21 10:39:13.777139 kubelet[2510]: I0421 10:39:13.776892 2510 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d1d10d-8e0d-411e-9e6d-45fe89536c14-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 21 10:39:13.777139 kubelet[2510]: I0421 10:39:13.776898 2510 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2nqg\" (UniqueName: \"kubernetes.io/projected/06d1d10d-8e0d-411e-9e6d-45fe89536c14-kube-api-access-h2nqg\") on node \"localhost\" DevicePath \"\"" Apr 21 10:39:13.787063 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:13.792119 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:13.800739 systemd-logind[1440]: New session 8 of user core. Apr 21 10:39:13.805925 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:39:14.001283 sshd[3917]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:14.005304 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:39:14.005734 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:37772.service: Deactivated successfully. Apr 21 10:39:14.010425 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:39:14.012808 systemd-logind[1440]: Removed session 8. 
Apr 21 10:39:14.339222 systemd[1]: Removed slice kubepods-besteffort-pod06d1d10d_8e0d_411e_9e6d_45fe89536c14.slice - libcontainer container kubepods-besteffort-pod06d1d10d_8e0d_411e_9e6d_45fe89536c14.slice. Apr 21 10:39:14.514987 systemd[1]: Created slice kubepods-besteffort-pod9228a264_b95c_4bee_b7ea_267a9554ce11.slice - libcontainer container kubepods-besteffort-pod9228a264_b95c_4bee_b7ea_267a9554ce11.slice. Apr 21 10:39:14.582659 kubelet[2510]: I0421 10:39:14.582133 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9228a264-b95c-4bee-b7ea-267a9554ce11-whisker-ca-bundle\") pod \"whisker-594587f7f6-hcwfc\" (UID: \"9228a264-b95c-4bee-b7ea-267a9554ce11\") " pod="calico-system/whisker-594587f7f6-hcwfc" Apr 21 10:39:14.582659 kubelet[2510]: I0421 10:39:14.582195 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9228a264-b95c-4bee-b7ea-267a9554ce11-whisker-backend-key-pair\") pod \"whisker-594587f7f6-hcwfc\" (UID: \"9228a264-b95c-4bee-b7ea-267a9554ce11\") " pod="calico-system/whisker-594587f7f6-hcwfc" Apr 21 10:39:14.582659 kubelet[2510]: I0421 10:39:14.582209 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9228a264-b95c-4bee-b7ea-267a9554ce11-nginx-config\") pod \"whisker-594587f7f6-hcwfc\" (UID: \"9228a264-b95c-4bee-b7ea-267a9554ce11\") " pod="calico-system/whisker-594587f7f6-hcwfc" Apr 21 10:39:14.582659 kubelet[2510]: I0421 10:39:14.582247 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mkr\" (UniqueName: \"kubernetes.io/projected/9228a264-b95c-4bee-b7ea-267a9554ce11-kube-api-access-b4mkr\") pod \"whisker-594587f7f6-hcwfc\" (UID: 
\"9228a264-b95c-4bee-b7ea-267a9554ce11\") " pod="calico-system/whisker-594587f7f6-hcwfc" Apr 21 10:39:14.822367 containerd[1462]: time="2026-04-21T10:39:14.822321358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-594587f7f6-hcwfc,Uid:9228a264-b95c-4bee-b7ea-267a9554ce11,Namespace:calico-system,Attempt:0,}" Apr 21 10:39:14.930575 systemd-networkd[1378]: caliecb3a4a5152: Link UP Apr 21 10:39:14.930803 systemd-networkd[1378]: caliecb3a4a5152: Gained carrier Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.854 [ERROR][4068] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.865 [INFO][4068] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--594587f7f6--hcwfc-eth0 whisker-594587f7f6- calico-system 9228a264-b95c-4bee-b7ea-267a9554ce11 954 0 2026-04-21 10:39:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:594587f7f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-594587f7f6-hcwfc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliecb3a4a5152 [] [] }} ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-" Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.865 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0" Apr 21 10:39:14.941678 containerd[1462]: 
2026-04-21 10:39:14.887 [INFO][4083] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" HandleID="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Workload="localhost-k8s-whisker--594587f7f6--hcwfc-eth0" Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.893 [INFO][4083] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" HandleID="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Workload="localhost-k8s-whisker--594587f7f6--hcwfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fda0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-594587f7f6-hcwfc", "timestamp":"2026-04-21 10:39:14.887208491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00043e580)} Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.893 [INFO][4083] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.893 [INFO][4083] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.893 [INFO][4083] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.897 [INFO][4083] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.901 [INFO][4083] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.905 [INFO][4083] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.907 [INFO][4083] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.909 [INFO][4083] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.909 [INFO][4083] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.911 [INFO][4083] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.914 [INFO][4083] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.918 [INFO][4083] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.919 [INFO][4083] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" host="localhost"
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.919 [INFO][4083] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:14.941678 containerd[1462]: 2026-04-21 10:39:14.919 [INFO][4083] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" HandleID="k8s-pod-network.d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Workload="localhost-k8s-whisker--594587f7f6--hcwfc-eth0"
Apr 21 10:39:14.942727 containerd[1462]: 2026-04-21 10:39:14.921 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--594587f7f6--hcwfc-eth0", GenerateName:"whisker-594587f7f6-", Namespace:"calico-system", SelfLink:"", UID:"9228a264-b95c-4bee-b7ea-267a9554ce11", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 39, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"594587f7f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-594587f7f6-hcwfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliecb3a4a5152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:39:14.942727 containerd[1462]: 2026-04-21 10:39:14.921 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0"
Apr 21 10:39:14.942727 containerd[1462]: 2026-04-21 10:39:14.921 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecb3a4a5152 ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0"
Apr 21 10:39:14.942727 containerd[1462]: 2026-04-21 10:39:14.930 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0"
Apr 21 10:39:14.942727 containerd[1462]: 2026-04-21 10:39:14.930 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--594587f7f6--hcwfc-eth0", GenerateName:"whisker-594587f7f6-", Namespace:"calico-system", SelfLink:"", UID:"9228a264-b95c-4bee-b7ea-267a9554ce11", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 39, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"594587f7f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502", Pod:"whisker-594587f7f6-hcwfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliecb3a4a5152", MAC:"86:5c:6b:48:7a:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:39:14.942727 containerd[1462]: 2026-04-21 10:39:14.939 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502" Namespace="calico-system" Pod="whisker-594587f7f6-hcwfc" WorkloadEndpoint="localhost-k8s-whisker--594587f7f6--hcwfc-eth0"
Apr 21 10:39:14.964138 containerd[1462]: time="2026-04-21T10:39:14.964056088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:39:14.964138 containerd[1462]: time="2026-04-21T10:39:14.964106065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:39:14.964138 containerd[1462]: time="2026-04-21T10:39:14.964118418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:39:14.964305 containerd[1462]: time="2026-04-21T10:39:14.964192020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:39:14.981990 systemd[1]: Started cri-containerd-d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502.scope - libcontainer container d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502.
Apr 21 10:39:14.991310 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:39:15.016656 containerd[1462]: time="2026-04-21T10:39:15.016598132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-594587f7f6-hcwfc,Uid:9228a264-b95c-4bee-b7ea-267a9554ce11,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502\""
Apr 21 10:39:15.018634 containerd[1462]: time="2026-04-21T10:39:15.018607114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 21 10:39:15.983064 systemd-networkd[1378]: caliecb3a4a5152: Gained IPv6LL
Apr 21 10:39:16.237454 kubelet[2510]: I0421 10:39:16.237138 2510 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:39:16.237914 kubelet[2510]: E0421 10:39:16.237590 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:39:16.336814 kubelet[2510]: I0421 10:39:16.336708 2510 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="06d1d10d-8e0d-411e-9e6d-45fe89536c14" path="/var/lib/kubelet/pods/06d1d10d-8e0d-411e-9e6d-45fe89536c14/volumes"
Apr 21 10:39:16.468008 kubelet[2510]: E0421 10:39:16.467583 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:39:16.806797 containerd[1462]: time="2026-04-21T10:39:16.806651523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:39:16.807645 containerd[1462]: time="2026-04-21T10:39:16.807484526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 21 10:39:16.808815 containerd[1462]: time="2026-04-21T10:39:16.808712265Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:39:16.811364 containerd[1462]: time="2026-04-21T10:39:16.811321666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:39:16.812058 containerd[1462]: time="2026-04-21T10:39:16.812020884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.793380445s"
Apr 21 10:39:16.812058 containerd[1462]: time="2026-04-21T10:39:16.812053764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 21 10:39:16.815873 containerd[1462]: time="2026-04-21T10:39:16.815749471Z" level=info msg="CreateContainer within sandbox \"d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 21 10:39:16.826498 containerd[1462]: time="2026-04-21T10:39:16.826455050Z" level=info msg="CreateContainer within sandbox \"d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f3333e8e33f78fca54e7ae67ab9e4469e86da8ceba2ea017d81c80c997540351\""
Apr 21 10:39:16.826952 containerd[1462]: time="2026-04-21T10:39:16.826930827Z" level=info msg="StartContainer for \"f3333e8e33f78fca54e7ae67ab9e4469e86da8ceba2ea017d81c80c997540351\""
Apr 21 10:39:16.848466 systemd[1]: run-containerd-runc-k8s.io-f3333e8e33f78fca54e7ae67ab9e4469e86da8ceba2ea017d81c80c997540351-runc.VA2A9m.mount: Deactivated successfully.
Apr 21 10:39:16.856994 systemd[1]: Started cri-containerd-f3333e8e33f78fca54e7ae67ab9e4469e86da8ceba2ea017d81c80c997540351.scope - libcontainer container f3333e8e33f78fca54e7ae67ab9e4469e86da8ceba2ea017d81c80c997540351.
Apr 21 10:39:16.890894 containerd[1462]: time="2026-04-21T10:39:16.890839197Z" level=info msg="StartContainer for \"f3333e8e33f78fca54e7ae67ab9e4469e86da8ceba2ea017d81c80c997540351\" returns successfully"
Apr 21 10:39:16.892197 containerd[1462]: time="2026-04-21T10:39:16.891964683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 21 10:39:17.165825 kernel: calico-node[4257]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 21 10:39:17.547442 systemd-networkd[1378]: vxlan.calico: Link UP
Apr 21 10:39:17.547448 systemd-networkd[1378]: vxlan.calico: Gained carrier
Apr 21 10:39:18.874274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496477217.mount: Deactivated successfully.
Apr 21 10:39:18.891111 containerd[1462]: time="2026-04-21T10:39:18.891051745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:39:18.891843 containerd[1462]: time="2026-04-21T10:39:18.891792418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 21 10:39:18.892639 containerd[1462]: time="2026-04-21T10:39:18.892601745Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:39:18.894845 containerd[1462]: time="2026-04-21T10:39:18.894796741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:39:18.895337 containerd[1462]: time="2026-04-21T10:39:18.895292412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.003297294s"
Apr 21 10:39:18.895402 containerd[1462]: time="2026-04-21T10:39:18.895379970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 21 10:39:18.900030 containerd[1462]: time="2026-04-21T10:39:18.899980597Z" level=info msg="CreateContainer within sandbox \"d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 21 10:39:18.912459 containerd[1462]: time="2026-04-21T10:39:18.912360741Z" level=info msg="CreateContainer within sandbox \"d1e0099b0e623faf4968babc4a0032ec38ed9660b97996a8af836a3f2977b502\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3b3ed70861b7770057990f59bad63e938e86f30f134d7781ffa4364e5b0ff280\""
Apr 21 10:39:18.912927 containerd[1462]: time="2026-04-21T10:39:18.912822684Z" level=info msg="StartContainer for \"3b3ed70861b7770057990f59bad63e938e86f30f134d7781ffa4364e5b0ff280\""
Apr 21 10:39:18.941952 systemd[1]: Started cri-containerd-3b3ed70861b7770057990f59bad63e938e86f30f134d7781ffa4364e5b0ff280.scope - libcontainer container 3b3ed70861b7770057990f59bad63e938e86f30f134d7781ffa4364e5b0ff280.
Apr 21 10:39:18.977560 containerd[1462]: time="2026-04-21T10:39:18.977492867Z" level=info msg="StartContainer for \"3b3ed70861b7770057990f59bad63e938e86f30f134d7781ffa4364e5b0ff280\" returns successfully"
Apr 21 10:39:19.014196 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:58332.service - OpenSSH per-connection server daemon (10.0.0.1:58332).
Apr 21 10:39:19.058198 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 58332 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:39:19.059504 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:39:19.063621 systemd-logind[1440]: New session 9 of user core.
Apr 21 10:39:19.066971 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:39:19.184893 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL
Apr 21 10:39:19.194033 sshd[4458]: pam_unix(sshd:session): session closed for user core
Apr 21 10:39:19.197043 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:58332.service: Deactivated successfully.
Apr 21 10:39:19.198871 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:39:19.199450 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:39:19.200516 systemd-logind[1440]: Removed session 9.
Apr 21 10:39:24.210643 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:58340.service - OpenSSH per-connection server daemon (10.0.0.1:58340).
Apr 21 10:39:24.243599 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 58340 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:39:24.245013 sshd[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:39:24.249663 systemd-logind[1440]: New session 10 of user core.
Apr 21 10:39:24.260971 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:39:24.336945 containerd[1462]: time="2026-04-21T10:39:24.336831780Z" level=info msg="StopPodSandbox for \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\""
Apr 21 10:39:24.337423 containerd[1462]: time="2026-04-21T10:39:24.337379943Z" level=info msg="StopPodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\""
Apr 21 10:39:24.337716 containerd[1462]: time="2026-04-21T10:39:24.337626602Z" level=info msg="StopPodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\""
Apr 21 10:39:24.383046 sshd[4496]: pam_unix(sshd:session): session closed for user core
Apr 21 10:39:24.386913 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:58340.service: Deactivated successfully.
Apr 21 10:39:24.389822 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:39:24.391740 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:39:24.394133 systemd-logind[1440]: Removed session 10.
Apr 21 10:39:24.404690 kubelet[2510]: I0421 10:39:24.404346 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-594587f7f6-hcwfc" podStartSLOduration=6.526040023 podStartE2EDuration="10.404324301s" podCreationTimestamp="2026-04-21 10:39:14 +0000 UTC" firstStartedPulling="2026-04-21 10:39:15.01796334 +0000 UTC m=+38.771295504" lastFinishedPulling="2026-04-21 10:39:18.896247614 +0000 UTC m=+42.649579782" observedRunningTime="2026-04-21 10:39:19.493509352 +0000 UTC m=+43.246841521" watchObservedRunningTime="2026-04-21 10:39:24.404324301 +0000 UTC m=+48.157656480"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.404 [INFO][4535] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.406 [INFO][4535] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" iface="eth0" netns="/var/run/netns/cni-8e999dc1-819c-04ca-1aac-d4ffad870f41"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4535] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" iface="eth0" netns="/var/run/netns/cni-8e999dc1-819c-04ca-1aac-d4ffad870f41"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4535] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" iface="eth0" netns="/var/run/netns/cni-8e999dc1-819c-04ca-1aac-d4ffad870f41"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.408 [INFO][4535] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.408 [INFO][4535] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.432 [INFO][4568] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.432 [INFO][4568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.432 [INFO][4568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.437 [WARNING][4568] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.437 [INFO][4568] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.439 [INFO][4568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:24.442837 containerd[1462]: 2026-04-21 10:39:24.441 [INFO][4535] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a"
Apr 21 10:39:24.443343 containerd[1462]: time="2026-04-21T10:39:24.443044133Z" level=info msg="TearDown network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" successfully"
Apr 21 10:39:24.443343 containerd[1462]: time="2026-04-21T10:39:24.443070276Z" level=info msg="StopPodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" returns successfully"
Apr 21 10:39:24.445549 systemd[1]: run-netns-cni\x2d8e999dc1\x2d819c\x2d04ca\x2d1aac\x2dd4ffad870f41.mount: Deactivated successfully.
Apr 21 10:39:24.447725 kubelet[2510]: E0421 10:39:24.447237 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:39:24.447897 containerd[1462]: time="2026-04-21T10:39:24.447838442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cqqqk,Uid:07a1844c-5aff-41d5-92cc-1b454992a7e4,Namespace:kube-system,Attempt:1,}"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.406 [INFO][4551] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4551] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" iface="eth0" netns="/var/run/netns/cni-0b2dd30b-74f8-eaeb-90fd-f28c4043c43a"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4551] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" iface="eth0" netns="/var/run/netns/cni-0b2dd30b-74f8-eaeb-90fd-f28c4043c43a"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.409 [INFO][4551] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" iface="eth0" netns="/var/run/netns/cni-0b2dd30b-74f8-eaeb-90fd-f28c4043c43a"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.409 [INFO][4551] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.410 [INFO][4551] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.433 [INFO][4571] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.434 [INFO][4571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.439 [INFO][4571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.445 [WARNING][4571] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.446 [INFO][4571] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0"
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.448 [INFO][4571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:24.451260 containerd[1462]: 2026-04-21 10:39:24.450 [INFO][4551] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2"
Apr 21 10:39:24.452279 containerd[1462]: time="2026-04-21T10:39:24.451415942Z" level=info msg="TearDown network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" successfully"
Apr 21 10:39:24.452279 containerd[1462]: time="2026-04-21T10:39:24.451433975Z" level=info msg="StopPodSandbox for \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" returns successfully"
Apr 21 10:39:24.454532 systemd[1]: run-netns-cni\x2d0b2dd30b\x2d74f8\x2deaeb\x2d90fd\x2df28c4043c43a.mount: Deactivated successfully.
Apr 21 10:39:24.455026 containerd[1462]: time="2026-04-21T10:39:24.455003070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6688bb788d-6t6qw,Uid:5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4,Namespace:calico-system,Attempt:1,}"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" iface="eth0" netns="/var/run/netns/cni-4e3634e2-faf8-cb44-d98a-d41a3fc4951d"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" iface="eth0" netns="/var/run/netns/cni-4e3634e2-faf8-cb44-d98a-d41a3fc4951d"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" iface="eth0" netns="/var/run/netns/cni-4e3634e2-faf8-cb44-d98a-d41a3fc4951d"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.407 [INFO][4531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.408 [INFO][4531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.434 [INFO][4569] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.434 [INFO][4569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.448 [INFO][4569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.454 [WARNING][4569] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.454 [INFO][4569] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.456 [INFO][4569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:24.459797 containerd[1462]: 2026-04-21 10:39:24.458 [INFO][4531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:24.460096 containerd[1462]: time="2026-04-21T10:39:24.459984398Z" level=info msg="TearDown network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" successfully"
Apr 21 10:39:24.460096 containerd[1462]: time="2026-04-21T10:39:24.459999737Z" level=info msg="StopPodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" returns successfully"
Apr 21 10:39:24.461434 systemd[1]: run-netns-cni\x2d4e3634e2\x2dfaf8\x2dcb44\x2dd98a\x2dd41a3fc4951d.mount: Deactivated successfully.
Apr 21 10:39:24.463962 containerd[1462]: time="2026-04-21T10:39:24.463912979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-92vsq,Uid:b7c9590f-6caa-4345-9c90-9f8d06102e17,Namespace:calico-system,Attempt:1,}"
Apr 21 10:39:24.582904 systemd-networkd[1378]: calib2fbb9ba0ae: Link UP
Apr 21 10:39:24.583504 systemd-networkd[1378]: calib2fbb9ba0ae: Gained carrier
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.507 [INFO][4592] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--cqqqk-eth0 coredns-7d764666f9- kube-system 07a1844c-5aff-41d5-92cc-1b454992a7e4 1041 0 2026-04-21 10:38:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-cqqqk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2fbb9ba0ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.507 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.543 [INFO][4631] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" HandleID="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.549 [INFO][4631] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" HandleID="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b46a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-cqqqk", "timestamp":"2026-04-21 10:39:24.543262488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00016adc0)}
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.549 [INFO][4631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.549 [INFO][4631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.549 [INFO][4631] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.552 [INFO][4631] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.557 [INFO][4631] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.562 [INFO][4631] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.563 [INFO][4631] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.565 [INFO][4631] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.565 [INFO][4631] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.566 [INFO][4631] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.571 [INFO][4631] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.576 [INFO][4631] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.576 [INFO][4631] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" host="localhost"
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.576 [INFO][4631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:24.598395 containerd[1462]: 2026-04-21 10:39:24.576 [INFO][4631] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" HandleID="k8s-pod-network.2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.598883 containerd[1462]: 2026-04-21 10:39:24.579 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cqqqk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"07a1844c-5aff-41d5-92cc-1b454992a7e4", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-cqqqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2fbb9ba0ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:39:24.598883 containerd[1462]: 2026-04-21 10:39:24.579 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21 10:39:24.598883 containerd[1462]: 2026-04-21 10:39:24.579 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2fbb9ba0ae ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0"
Apr 21
10:39:24.598883 containerd[1462]: 2026-04-21 10:39:24.583 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:24.598883 containerd[1462]: 2026-04-21 10:39:24.583 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cqqqk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"07a1844c-5aff-41d5-92cc-1b454992a7e4", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4", Pod:"coredns-7d764666f9-cqqqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2fbb9ba0ae", 
MAC:"d2:8e:2a:c4:11:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:24.598883 containerd[1462]: 2026-04-21 10:39:24.594 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4" Namespace="kube-system" Pod="coredns-7d764666f9-cqqqk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:24.619988 containerd[1462]: time="2026-04-21T10:39:24.619670124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:24.621035 containerd[1462]: time="2026-04-21T10:39:24.620921233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:24.621035 containerd[1462]: time="2026-04-21T10:39:24.620962203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:24.621275 containerd[1462]: time="2026-04-21T10:39:24.621044807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:24.641064 systemd[1]: Started cri-containerd-2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4.scope - libcontainer container 2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4. Apr 21 10:39:24.654886 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:24.690992 containerd[1462]: time="2026-04-21T10:39:24.690663987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cqqqk,Uid:07a1844c-5aff-41d5-92cc-1b454992a7e4,Namespace:kube-system,Attempt:1,} returns sandbox id \"2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4\"" Apr 21 10:39:24.690962 systemd-networkd[1378]: calicb4024e080b: Link UP Apr 21 10:39:24.693907 kubelet[2510]: E0421 10:39:24.693011 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:24.693483 systemd-networkd[1378]: calicb4024e080b: Gained carrier Apr 21 10:39:24.707050 containerd[1462]: time="2026-04-21T10:39:24.706921261Z" level=info msg="CreateContainer within sandbox \"2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.522 [INFO][4602] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0 calico-kube-controllers-6688bb788d- calico-system 5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4 1043 0 2026-04-21 10:38:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6688bb788d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6688bb788d-6t6qw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb4024e080b [] [] }} ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.522 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.551 [INFO][4639] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" HandleID="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.557 [INFO][4639] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" HandleID="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037dda0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6688bb788d-6t6qw", "timestamp":"2026-04-21 10:39:24.551156498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036fb80)} Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.557 [INFO][4639] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.576 [INFO][4639] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.576 [INFO][4639] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.655 [INFO][4639] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.660 [INFO][4639] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.665 [INFO][4639] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.670 [INFO][4639] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.672 [INFO][4639] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.672 [INFO][4639] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.674 [INFO][4639] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880 Apr 21 10:39:24.718335 containerd[1462]: 
2026-04-21 10:39:24.680 [INFO][4639] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.684 [INFO][4639] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.684 [INFO][4639] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" host="localhost" Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.685 [INFO][4639] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:24.718335 containerd[1462]: 2026-04-21 10:39:24.685 [INFO][4639] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" HandleID="k8s-pod-network.1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:24.719966 containerd[1462]: 2026-04-21 10:39:24.686 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0", GenerateName:"calico-kube-controllers-6688bb788d-", Namespace:"calico-system", SelfLink:"", 
UID:"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6688bb788d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6688bb788d-6t6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4024e080b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:24.719966 containerd[1462]: 2026-04-21 10:39:24.687 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:24.719966 containerd[1462]: 2026-04-21 10:39:24.687 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb4024e080b ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 
10:39:24.719966 containerd[1462]: 2026-04-21 10:39:24.702 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:24.719966 containerd[1462]: 2026-04-21 10:39:24.702 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0", GenerateName:"calico-kube-controllers-6688bb788d-", Namespace:"calico-system", SelfLink:"", UID:"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6688bb788d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880", Pod:"calico-kube-controllers-6688bb788d-6t6qw", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4024e080b", MAC:"d2:d4:cc:d7:36:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:24.719966 containerd[1462]: 2026-04-21 10:39:24.713 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880" Namespace="calico-system" Pod="calico-kube-controllers-6688bb788d-6t6qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:24.725797 containerd[1462]: time="2026-04-21T10:39:24.725725903Z" level=info msg="CreateContainer within sandbox \"2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"976f14f05898540f47bda3559fb8e09a8de469355b702119aeb6371963bc1293\"" Apr 21 10:39:24.728444 containerd[1462]: time="2026-04-21T10:39:24.728425164Z" level=info msg="StartContainer for \"976f14f05898540f47bda3559fb8e09a8de469355b702119aeb6371963bc1293\"" Apr 21 10:39:24.747639 containerd[1462]: time="2026-04-21T10:39:24.746270306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:24.747820 containerd[1462]: time="2026-04-21T10:39:24.747800082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:24.747888 containerd[1462]: time="2026-04-21T10:39:24.747877118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:24.748085 containerd[1462]: time="2026-04-21T10:39:24.748069155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:24.766983 systemd[1]: Started cri-containerd-976f14f05898540f47bda3559fb8e09a8de469355b702119aeb6371963bc1293.scope - libcontainer container 976f14f05898540f47bda3559fb8e09a8de469355b702119aeb6371963bc1293. Apr 21 10:39:24.770880 systemd[1]: Started cri-containerd-1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880.scope - libcontainer container 1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880. Apr 21 10:39:24.785135 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:24.797903 systemd-networkd[1378]: cali7681e8dedf9: Link UP Apr 21 10:39:24.798658 systemd-networkd[1378]: cali7681e8dedf9: Gained carrier Apr 21 10:39:24.810229 containerd[1462]: time="2026-04-21T10:39:24.810134615Z" level=info msg="StartContainer for \"976f14f05898540f47bda3559fb8e09a8de469355b702119aeb6371963bc1293\" returns successfully" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.522 [INFO][4614] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0 calico-apiserver-6d96d7bfc7- calico-system b7c9590f-6caa-4345-9c90-9f8d06102e17 1042 0 2026-04-21 10:38:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d96d7bfc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d96d7bfc7-92vsq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7681e8dedf9 [] [] }} 
ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.522 [INFO][4614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.559 [INFO][4641] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" HandleID="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.567 [INFO][4641] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" HandleID="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6d96d7bfc7-92vsq", "timestamp":"2026-04-21 10:39:24.559605765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00041a840)} Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.567 [INFO][4641] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.684 [INFO][4641] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.684 [INFO][4641] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.757 [INFO][4641] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.764 [INFO][4641] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.770 [INFO][4641] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.773 [INFO][4641] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.776 [INFO][4641] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.776 [INFO][4641] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.778 [INFO][4641] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185 Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.783 [INFO][4641] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.791 [INFO][4641] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.792 [INFO][4641] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" host="localhost" Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.792 [INFO][4641] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:24.815022 containerd[1462]: 2026-04-21 10:39:24.792 [INFO][4641] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" HandleID="k8s-pod-network.580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.815529 containerd[1462]: 2026-04-21 10:39:24.795 [INFO][4614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"b7c9590f-6caa-4345-9c90-9f8d06102e17", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d96d7bfc7-92vsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7681e8dedf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:24.815529 containerd[1462]: 2026-04-21 10:39:24.796 [INFO][4614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.815529 containerd[1462]: 2026-04-21 10:39:24.796 [INFO][4614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7681e8dedf9 ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.815529 containerd[1462]: 2026-04-21 10:39:24.797 [INFO][4614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.815529 containerd[1462]: 2026-04-21 10:39:24.798 
[INFO][4614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"b7c9590f-6caa-4345-9c90-9f8d06102e17", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185", Pod:"calico-apiserver-6d96d7bfc7-92vsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7681e8dedf9", MAC:"76:16:de:ca:93:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:24.815529 containerd[1462]: 2026-04-21 10:39:24.809 [INFO][4614] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-92vsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0" Apr 21 10:39:24.840887 containerd[1462]: time="2026-04-21T10:39:24.840643290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6688bb788d-6t6qw,Uid:5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880\"" Apr 21 10:39:24.844514 containerd[1462]: time="2026-04-21T10:39:24.844341347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:39:24.845710 containerd[1462]: time="2026-04-21T10:39:24.845508584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:24.847214 containerd[1462]: time="2026-04-21T10:39:24.846827172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:24.847214 containerd[1462]: time="2026-04-21T10:39:24.846849233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:24.847494 containerd[1462]: time="2026-04-21T10:39:24.847256020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:24.874083 systemd[1]: Started cri-containerd-580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185.scope - libcontainer container 580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185. 
Apr 21 10:39:24.890595 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:24.919478 containerd[1462]: time="2026-04-21T10:39:24.919447843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-92vsq,Uid:b7c9590f-6caa-4345-9c90-9f8d06102e17,Namespace:calico-system,Attempt:1,} returns sandbox id \"580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185\"" Apr 21 10:39:25.343350 containerd[1462]: time="2026-04-21T10:39:25.343265665Z" level=info msg="StopPodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\"" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.386 [INFO][4898] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.387 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" iface="eth0" netns="/var/run/netns/cni-0ca6a213-3069-3f1d-6ea4-a5daaaeae0a6" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.387 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" iface="eth0" netns="/var/run/netns/cni-0ca6a213-3069-3f1d-6ea4-a5daaaeae0a6" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.388 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" iface="eth0" netns="/var/run/netns/cni-0ca6a213-3069-3f1d-6ea4-a5daaaeae0a6" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.388 [INFO][4898] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.388 [INFO][4898] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.414 [INFO][4906] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.415 [INFO][4906] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.415 [INFO][4906] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.420 [WARNING][4906] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.420 [INFO][4906] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.422 [INFO][4906] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:25.425115 containerd[1462]: 2026-04-21 10:39:25.423 [INFO][4898] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:25.425544 containerd[1462]: time="2026-04-21T10:39:25.425301684Z" level=info msg="TearDown network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" successfully" Apr 21 10:39:25.425544 containerd[1462]: time="2026-04-21T10:39:25.425323301Z" level=info msg="StopPodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" returns successfully" Apr 21 10:39:25.428584 kubelet[2510]: E0421 10:39:25.428495 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:25.429060 containerd[1462]: time="2026-04-21T10:39:25.429026031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mqdmg,Uid:0bb43cc7-550a-4d6f-af40-1ccdb9ab522e,Namespace:kube-system,Attempt:1,}" Apr 21 10:39:25.449709 systemd[1]: run-netns-cni\x2d0ca6a213\x2d3069\x2d3f1d\x2d6ea4\x2da5daaaeae0a6.mount: Deactivated successfully. 
Apr 21 10:39:25.497838 kubelet[2510]: E0421 10:39:25.496854 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:25.508546 kubelet[2510]: I0421 10:39:25.508102 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cqqqk" podStartSLOduration=41.50809028 podStartE2EDuration="41.50809028s" podCreationTimestamp="2026-04-21 10:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:39:25.506811532 +0000 UTC m=+49.260143701" watchObservedRunningTime="2026-04-21 10:39:25.50809028 +0000 UTC m=+49.261422454" Apr 21 10:39:25.547371 systemd-networkd[1378]: cali2b63b00b6eb: Link UP Apr 21 10:39:25.547573 systemd-networkd[1378]: cali2b63b00b6eb: Gained carrier Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.473 [INFO][4915] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--mqdmg-eth0 coredns-7d764666f9- kube-system 0bb43cc7-550a-4d6f-af40-1ccdb9ab522e 1065 0 2026-04-21 10:38:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-mqdmg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b63b00b6eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.473 [INFO][4915] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.500 [INFO][4928] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" HandleID="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.507 [INFO][4928] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" HandleID="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-mqdmg", "timestamp":"2026-04-21 10:39:25.500717419 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000412420)} Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.508 [INFO][4928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.508 [INFO][4928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.508 [INFO][4928] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.513 [INFO][4928] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.520 [INFO][4928] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.526 [INFO][4928] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.529 [INFO][4928] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.531 [INFO][4928] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.531 [INFO][4928] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.533 [INFO][4928] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.537 [INFO][4928] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.542 [INFO][4928] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.542 [INFO][4928] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" host="localhost" Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.542 [INFO][4928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:25.558864 containerd[1462]: 2026-04-21 10:39:25.542 [INFO][4928] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" HandleID="k8s-pod-network.03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.559334 containerd[1462]: 2026-04-21 10:39:25.544 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--mqdmg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-mqdmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b63b00b6eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:25.559334 containerd[1462]: 2026-04-21 10:39:25.544 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.559334 containerd[1462]: 2026-04-21 10:39:25.544 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b63b00b6eb ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 
10:39:25.559334 containerd[1462]: 2026-04-21 10:39:25.546 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.559334 containerd[1462]: 2026-04-21 10:39:25.547 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--mqdmg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f", Pod:"coredns-7d764666f9-mqdmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b63b00b6eb", 
MAC:"1e:73:2e:f6:3d:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:25.559334 containerd[1462]: 2026-04-21 10:39:25.556 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f" Namespace="kube-system" Pod="coredns-7d764666f9-mqdmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:25.580138 containerd[1462]: time="2026-04-21T10:39:25.579849716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:25.580138 containerd[1462]: time="2026-04-21T10:39:25.579930359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:25.580138 containerd[1462]: time="2026-04-21T10:39:25.579956657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:25.580671 containerd[1462]: time="2026-04-21T10:39:25.580329113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:25.600086 systemd[1]: Started cri-containerd-03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f.scope - libcontainer container 03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f. Apr 21 10:39:25.611813 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:25.637903 containerd[1462]: time="2026-04-21T10:39:25.637833271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mqdmg,Uid:0bb43cc7-550a-4d6f-af40-1ccdb9ab522e,Namespace:kube-system,Attempt:1,} returns sandbox id \"03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f\"" Apr 21 10:39:25.639034 kubelet[2510]: E0421 10:39:25.638964 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:25.644590 containerd[1462]: time="2026-04-21T10:39:25.644539636Z" level=info msg="CreateContainer within sandbox \"03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:39:25.657874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191297975.mount: Deactivated successfully. 
Apr 21 10:39:25.659621 containerd[1462]: time="2026-04-21T10:39:25.659535124Z" level=info msg="CreateContainer within sandbox \"03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3328607d91ab9def5d7865e0e852f2fbe7d223a4aba65e5d5830310ecbb529bb\"" Apr 21 10:39:25.660178 containerd[1462]: time="2026-04-21T10:39:25.660115798Z" level=info msg="StartContainer for \"3328607d91ab9def5d7865e0e852f2fbe7d223a4aba65e5d5830310ecbb529bb\"" Apr 21 10:39:25.689979 systemd[1]: Started cri-containerd-3328607d91ab9def5d7865e0e852f2fbe7d223a4aba65e5d5830310ecbb529bb.scope - libcontainer container 3328607d91ab9def5d7865e0e852f2fbe7d223a4aba65e5d5830310ecbb529bb. Apr 21 10:39:25.711978 containerd[1462]: time="2026-04-21T10:39:25.711918129Z" level=info msg="StartContainer for \"3328607d91ab9def5d7865e0e852f2fbe7d223a4aba65e5d5830310ecbb529bb\" returns successfully" Apr 21 10:39:25.903159 systemd-networkd[1378]: calicb4024e080b: Gained IPv6LL Apr 21 10:39:25.903514 systemd-networkd[1378]: cali7681e8dedf9: Gained IPv6LL Apr 21 10:39:26.340058 containerd[1462]: time="2026-04-21T10:39:26.339979491Z" level=info msg="StopPodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\"" Apr 21 10:39:26.415333 systemd-networkd[1378]: calib2fbb9ba0ae: Gained IPv6LL Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.382 [INFO][5057] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.382 [INFO][5057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" iface="eth0" netns="/var/run/netns/cni-f4b35ec2-dcbe-ea55-0813-207138a63680" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.383 [INFO][5057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" iface="eth0" netns="/var/run/netns/cni-f4b35ec2-dcbe-ea55-0813-207138a63680" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.383 [INFO][5057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" iface="eth0" netns="/var/run/netns/cni-f4b35ec2-dcbe-ea55-0813-207138a63680" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.383 [INFO][5057] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.383 [INFO][5057] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.406 [INFO][5066] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.406 [INFO][5066] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.406 [INFO][5066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.411 [WARNING][5066] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.411 [INFO][5066] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.413 [INFO][5066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:26.418294 containerd[1462]: 2026-04-21 10:39:26.415 [INFO][5057] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:26.419073 containerd[1462]: time="2026-04-21T10:39:26.418454832Z" level=info msg="TearDown network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" successfully" Apr 21 10:39:26.419073 containerd[1462]: time="2026-04-21T10:39:26.418473730Z" level=info msg="StopPodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" returns successfully" Apr 21 10:39:26.422451 containerd[1462]: time="2026-04-21T10:39:26.422349429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-f2zn2,Uid:b7a1fd50-1ed4-4589-8953-65abd596d417,Namespace:calico-system,Attempt:1,}" Apr 21 10:39:26.447260 systemd[1]: run-netns-cni\x2df4b35ec2\x2ddcbe\x2dea55\x2d0813\x2d207138a63680.mount: Deactivated successfully. 
Apr 21 10:39:26.506418 kubelet[2510]: E0421 10:39:26.506368 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:26.506418 kubelet[2510]: E0421 10:39:26.506429 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:26.518489 kubelet[2510]: I0421 10:39:26.518371 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mqdmg" podStartSLOduration=42.518359047 podStartE2EDuration="42.518359047s" podCreationTimestamp="2026-04-21 10:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:39:26.517946647 +0000 UTC m=+50.271278824" watchObservedRunningTime="2026-04-21 10:39:26.518359047 +0000 UTC m=+50.271691212" Apr 21 10:39:26.535248 systemd-networkd[1378]: calib006519cd36: Link UP Apr 21 10:39:26.535996 systemd-networkd[1378]: calib006519cd36: Gained carrier Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.468 [INFO][5079] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0 goldmane-9f7667bb8- calico-system b7a1fd50-1ed4-4589-8953-65abd596d417 1087 0 2026-04-21 10:38:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-f2zn2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib006519cd36 [] [] }} ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.468 [INFO][5079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.494 [INFO][5094] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" HandleID="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.499 [INFO][5094] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" HandleID="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-f2zn2", "timestamp":"2026-04-21 10:39:26.494461679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000622b00)} Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.500 [INFO][5094] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.500 [INFO][5094] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.500 [INFO][5094] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.502 [INFO][5094] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.507 [INFO][5094] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.514 [INFO][5094] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.518 [INFO][5094] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.521 [INFO][5094] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.521 [INFO][5094] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.523 [INFO][5094] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798 Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.526 [INFO][5094] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.531 [INFO][5094] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.531 [INFO][5094] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" host="localhost" Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.531 [INFO][5094] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:26.546996 containerd[1462]: 2026-04-21 10:39:26.531 [INFO][5094] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" HandleID="k8s-pod-network.61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.548679 containerd[1462]: 2026-04-21 10:39:26.533 [INFO][5079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"b7a1fd50-1ed4-4589-8953-65abd596d417", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-f2zn2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib006519cd36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:26.548679 containerd[1462]: 2026-04-21 10:39:26.533 [INFO][5079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.548679 containerd[1462]: 2026-04-21 10:39:26.533 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib006519cd36 ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.548679 containerd[1462]: 2026-04-21 10:39:26.536 [INFO][5079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.548679 containerd[1462]: 2026-04-21 10:39:26.536 [INFO][5079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"b7a1fd50-1ed4-4589-8953-65abd596d417", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798", Pod:"goldmane-9f7667bb8-f2zn2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib006519cd36", MAC:"26:fe:bd:57:11:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:26.548679 containerd[1462]: 2026-04-21 10:39:26.545 [INFO][5079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798" Namespace="calico-system" Pod="goldmane-9f7667bb8-f2zn2" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:26.565125 containerd[1462]: time="2026-04-21T10:39:26.565008300Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:26.565125 containerd[1462]: time="2026-04-21T10:39:26.565053767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:26.565125 containerd[1462]: time="2026-04-21T10:39:26.565066143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:26.566465 containerd[1462]: time="2026-04-21T10:39:26.566337328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:26.596171 systemd[1]: Started cri-containerd-61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798.scope - libcontainer container 61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798. Apr 21 10:39:26.607040 systemd-networkd[1378]: cali2b63b00b6eb: Gained IPv6LL Apr 21 10:39:26.613226 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:26.640412 containerd[1462]: time="2026-04-21T10:39:26.640362266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-f2zn2,Uid:b7a1fd50-1ed4-4589-8953-65abd596d417,Namespace:calico-system,Attempt:1,} returns sandbox id \"61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798\"" Apr 21 10:39:27.334692 containerd[1462]: time="2026-04-21T10:39:27.334610882Z" level=info msg="StopPodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\"" Apr 21 10:39:27.334692 containerd[1462]: time="2026-04-21T10:39:27.334638890Z" level=info msg="StopPodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\"" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.384 [INFO][5191] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.385 [INFO][5191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" iface="eth0" netns="/var/run/netns/cni-3c284c73-8c06-676e-af39-91cf79f40cc3" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.385 [INFO][5191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" iface="eth0" netns="/var/run/netns/cni-3c284c73-8c06-676e-af39-91cf79f40cc3" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.386 [INFO][5191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" iface="eth0" netns="/var/run/netns/cni-3c284c73-8c06-676e-af39-91cf79f40cc3" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.386 [INFO][5191] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.386 [INFO][5191] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.410 [INFO][5211] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.411 [INFO][5211] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.411 [INFO][5211] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.420 [WARNING][5211] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.420 [INFO][5211] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.422 [INFO][5211] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:27.426816 containerd[1462]: 2026-04-21 10:39:27.424 [INFO][5191] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:27.427809 containerd[1462]: time="2026-04-21T10:39:27.427730133Z" level=info msg="TearDown network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" successfully" Apr 21 10:39:27.427979 containerd[1462]: time="2026-04-21T10:39:27.427897251Z" level=info msg="StopPodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" returns successfully" Apr 21 10:39:27.430146 systemd[1]: run-netns-cni\x2d3c284c73\x2d8c06\x2d676e\x2daf39\x2d91cf79f40cc3.mount: Deactivated successfully. 
Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.382 [INFO][5190] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.383 [INFO][5190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" iface="eth0" netns="/var/run/netns/cni-5d44f083-ad3d-bf46-9992-026e548c12f6" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.383 [INFO][5190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" iface="eth0" netns="/var/run/netns/cni-5d44f083-ad3d-bf46-9992-026e548c12f6" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.383 [INFO][5190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" iface="eth0" netns="/var/run/netns/cni-5d44f083-ad3d-bf46-9992-026e548c12f6" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.383 [INFO][5190] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.383 [INFO][5190] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.411 [INFO][5205] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.411 [INFO][5205] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.422 [INFO][5205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.429 [WARNING][5205] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.430 [INFO][5205] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.434 [INFO][5205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:27.438382 containerd[1462]: 2026-04-21 10:39:27.436 [INFO][5190] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:27.439232 containerd[1462]: time="2026-04-21T10:39:27.439126054Z" level=info msg="TearDown network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" successfully" Apr 21 10:39:27.439232 containerd[1462]: time="2026-04-21T10:39:27.439223632Z" level=info msg="StopPodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" returns successfully" Apr 21 10:39:27.439305 containerd[1462]: time="2026-04-21T10:39:27.439138109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshx6,Uid:9fdc5648-d90a-492e-8550-ef4cb967e14b,Namespace:calico-system,Attempt:1,}" Apr 21 10:39:27.441023 systemd[1]: run-netns-cni\x2d5d44f083\x2dad3d\x2dbf46\x2d9992\x2d026e548c12f6.mount: Deactivated successfully. Apr 21 10:39:27.443016 containerd[1462]: time="2026-04-21T10:39:27.442976148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-rtbsk,Uid:8e8005a6-289b-49cd-bea4-c17d23bb38fa,Namespace:calico-system,Attempt:1,}" Apr 21 10:39:27.552448 kubelet[2510]: E0421 10:39:27.552398 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:27.554839 kubelet[2510]: E0421 10:39:27.552990 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:27.656150 systemd-networkd[1378]: cali7b42effd92c: Link UP Apr 21 10:39:27.656376 systemd-networkd[1378]: cali7b42effd92c: Gained carrier Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.490 [INFO][5234] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0 
calico-apiserver-6d96d7bfc7- calico-system 8e8005a6-289b-49cd-bea4-c17d23bb38fa 1105 0 2026-04-21 10:38:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d96d7bfc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d96d7bfc7-rtbsk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7b42effd92c [] [] }} ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.491 [INFO][5234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.607 [INFO][5253] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" HandleID="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.617 [INFO][5253] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" HandleID="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f0140), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-apiserver-6d96d7bfc7-rtbsk", "timestamp":"2026-04-21 10:39:27.607058837 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.617 [INFO][5253] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.617 [INFO][5253] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.617 [INFO][5253] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.623 [INFO][5253] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.628 [INFO][5253] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.633 [INFO][5253] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.635 [INFO][5253] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.638 [INFO][5253] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.638 [INFO][5253] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" host="localhost" Apr 21 10:39:27.667658 
containerd[1462]: 2026-04-21 10:39:27.640 [INFO][5253] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687 Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.644 [INFO][5253] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.651 [INFO][5253] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.651 [INFO][5253] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" host="localhost" Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.651 [INFO][5253] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:39:27.667658 containerd[1462]: 2026-04-21 10:39:27.651 [INFO][5253] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" HandleID="k8s-pod-network.d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.668320 containerd[1462]: 2026-04-21 10:39:27.653 [INFO][5234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"8e8005a6-289b-49cd-bea4-c17d23bb38fa", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d96d7bfc7-rtbsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7b42effd92c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:27.668320 containerd[1462]: 2026-04-21 10:39:27.654 [INFO][5234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.668320 containerd[1462]: 2026-04-21 10:39:27.654 [INFO][5234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b42effd92c ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.668320 containerd[1462]: 2026-04-21 10:39:27.656 [INFO][5234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.668320 containerd[1462]: 2026-04-21 10:39:27.656 [INFO][5234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", 
SelfLink:"", UID:"8e8005a6-289b-49cd-bea4-c17d23bb38fa", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687", Pod:"calico-apiserver-6d96d7bfc7-rtbsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7b42effd92c", MAC:"7a:f0:d7:ad:72:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:27.668320 containerd[1462]: 2026-04-21 10:39:27.665 [INFO][5234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687" Namespace="calico-system" Pod="calico-apiserver-6d96d7bfc7-rtbsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:27.687366 containerd[1462]: time="2026-04-21T10:39:27.685965257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:27.687366 containerd[1462]: time="2026-04-21T10:39:27.686002613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:27.687366 containerd[1462]: time="2026-04-21T10:39:27.686020761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:27.687366 containerd[1462]: time="2026-04-21T10:39:27.686084376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:27.708232 systemd[1]: Started cri-containerd-d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687.scope - libcontainer container d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687. Apr 21 10:39:27.718139 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:27.744647 containerd[1462]: time="2026-04-21T10:39:27.744601342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d96d7bfc7-rtbsk,Uid:8e8005a6-289b-49cd-bea4-c17d23bb38fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687\"" Apr 21 10:39:27.764398 systemd-networkd[1378]: cali190265231c8: Link UP Apr 21 10:39:27.764495 systemd-networkd[1378]: cali190265231c8: Gained carrier Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.489 [INFO][5223] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kshx6-eth0 csi-node-driver- calico-system 9fdc5648-d90a-492e-8550-ef4cb967e14b 1106 0 2026-04-21 10:38:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kshx6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali190265231c8 [] [] }} ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.489 [INFO][5223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.614 [INFO][5251] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" HandleID="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.623 [INFO][5251] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" HandleID="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000592350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kshx6", "timestamp":"2026-04-21 10:39:27.614346607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001122c0)} Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.623 [INFO][5251] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.651 [INFO][5251] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.651 [INFO][5251] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.725 [INFO][5251] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.731 [INFO][5251] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.737 [INFO][5251] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.740 [INFO][5251] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.744 [INFO][5251] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.744 [INFO][5251] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.746 [INFO][5251] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.751 [INFO][5251] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.759 [INFO][5251] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.759 [INFO][5251] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" host="localhost" Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.759 [INFO][5251] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:27.787415 containerd[1462]: 2026-04-21 10:39:27.759 [INFO][5251] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" HandleID="k8s-pod-network.b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.788059 containerd[1462]: 2026-04-21 10:39:27.761 [INFO][5223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kshx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fdc5648-d90a-492e-8550-ef4cb967e14b", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kshx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190265231c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:27.788059 containerd[1462]: 2026-04-21 10:39:27.761 [INFO][5223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.788059 containerd[1462]: 2026-04-21 10:39:27.761 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali190265231c8 ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.788059 containerd[1462]: 2026-04-21 10:39:27.765 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.788059 containerd[1462]: 2026-04-21 10:39:27.768 [INFO][5223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kshx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fdc5648-d90a-492e-8550-ef4cb967e14b", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a", Pod:"csi-node-driver-kshx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190265231c8", MAC:"4e:56:b8:2c:70:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 
10:39:27.788059 containerd[1462]: 2026-04-21 10:39:27.785 [INFO][5223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a" Namespace="calico-system" Pod="csi-node-driver-kshx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:27.850990 containerd[1462]: time="2026-04-21T10:39:27.850741045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:39:27.851670 containerd[1462]: time="2026-04-21T10:39:27.851502496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:39:27.853155 containerd[1462]: time="2026-04-21T10:39:27.853099135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:27.853279 containerd[1462]: time="2026-04-21T10:39:27.853192289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:39:27.875933 systemd[1]: Started cri-containerd-b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a.scope - libcontainer container b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a. 
Apr 21 10:39:27.887335 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:39:27.899186 containerd[1462]: time="2026-04-21T10:39:27.899146215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshx6,Uid:9fdc5648-d90a-492e-8550-ef4cb967e14b,Namespace:calico-system,Attempt:1,} returns sandbox id \"b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a\"" Apr 21 10:39:28.335050 systemd-networkd[1378]: calib006519cd36: Gained IPv6LL Apr 21 10:39:28.381587 containerd[1462]: time="2026-04-21T10:39:28.381515581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:28.382170 containerd[1462]: time="2026-04-21T10:39:28.382109271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:39:28.383027 containerd[1462]: time="2026-04-21T10:39:28.382963295Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:28.385061 containerd[1462]: time="2026-04-21T10:39:28.385011478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:28.385447 containerd[1462]: time="2026-04-21T10:39:28.385419187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 
3.540964725s" Apr 21 10:39:28.385476 containerd[1462]: time="2026-04-21T10:39:28.385451328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:39:28.386566 containerd[1462]: time="2026-04-21T10:39:28.386429638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:39:28.394273 containerd[1462]: time="2026-04-21T10:39:28.394231487Z" level=info msg="CreateContainer within sandbox \"1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:39:28.405288 containerd[1462]: time="2026-04-21T10:39:28.405179206Z" level=info msg="CreateContainer within sandbox \"1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9687a76ea2256b2fe7a47ba87e7464983cb0e46a8f481153f7eeb18bcd7bc492\"" Apr 21 10:39:28.405820 containerd[1462]: time="2026-04-21T10:39:28.405793284Z" level=info msg="StartContainer for \"9687a76ea2256b2fe7a47ba87e7464983cb0e46a8f481153f7eeb18bcd7bc492\"" Apr 21 10:39:28.436986 systemd[1]: Started cri-containerd-9687a76ea2256b2fe7a47ba87e7464983cb0e46a8f481153f7eeb18bcd7bc492.scope - libcontainer container 9687a76ea2256b2fe7a47ba87e7464983cb0e46a8f481153f7eeb18bcd7bc492. 
Apr 21 10:39:28.474223 containerd[1462]: time="2026-04-21T10:39:28.474160095Z" level=info msg="StartContainer for \"9687a76ea2256b2fe7a47ba87e7464983cb0e46a8f481153f7eeb18bcd7bc492\" returns successfully" Apr 21 10:39:28.561652 kubelet[2510]: E0421 10:39:28.561601 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:28.568248 kubelet[2510]: I0421 10:39:28.567259 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6688bb788d-6t6qw" podStartSLOduration=33.02350157 podStartE2EDuration="36.567248586s" podCreationTimestamp="2026-04-21 10:38:52 +0000 UTC" firstStartedPulling="2026-04-21 10:39:24.842542796 +0000 UTC m=+48.595874961" lastFinishedPulling="2026-04-21 10:39:28.386289809 +0000 UTC m=+52.139621977" observedRunningTime="2026-04-21 10:39:28.566877963 +0000 UTC m=+52.320210138" watchObservedRunningTime="2026-04-21 10:39:28.567248586 +0000 UTC m=+52.320580750" Apr 21 10:39:28.575355 systemd[1]: run-containerd-runc-k8s.io-9687a76ea2256b2fe7a47ba87e7464983cb0e46a8f481153f7eeb18bcd7bc492-runc.g8lQ0S.mount: Deactivated successfully. Apr 21 10:39:29.103130 systemd-networkd[1378]: cali7b42effd92c: Gained IPv6LL Apr 21 10:39:29.395683 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:34432.service - OpenSSH per-connection server daemon (10.0.0.1:34432). Apr 21 10:39:29.422950 systemd-networkd[1378]: cali190265231c8: Gained IPv6LL Apr 21 10:39:29.437926 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 34432 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:29.439316 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:29.442934 systemd-logind[1440]: New session 11 of user core. Apr 21 10:39:29.446927 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 21 10:39:29.564306 kubelet[2510]: E0421 10:39:29.563988 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:39:29.584035 sshd[5478]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:29.590872 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:34432.service: Deactivated successfully. Apr 21 10:39:29.592269 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:39:29.593336 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:39:29.594344 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:34434.service - OpenSSH per-connection server daemon (10.0.0.1:34434). Apr 21 10:39:29.595058 systemd-logind[1440]: Removed session 11. Apr 21 10:39:29.623958 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 34434 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:29.625109 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:29.628822 systemd-logind[1440]: New session 12 of user core. Apr 21 10:39:29.636958 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:39:29.791911 sshd[5497]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:29.800236 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:34434.service: Deactivated successfully. Apr 21 10:39:29.804347 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:39:29.810014 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:39:29.822175 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:34440.service - OpenSSH per-connection server daemon (10.0.0.1:34440). Apr 21 10:39:29.823674 systemd-logind[1440]: Removed session 12. 
Apr 21 10:39:29.849018 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 34440 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:29.850139 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:29.853831 systemd-logind[1440]: New session 13 of user core. Apr 21 10:39:29.859997 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:39:29.984194 sshd[5515]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:29.989345 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:34440.service: Deactivated successfully. Apr 21 10:39:29.991126 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:39:29.991802 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:39:29.992734 systemd-logind[1440]: Removed session 13. Apr 21 10:39:31.526899 containerd[1462]: time="2026-04-21T10:39:31.526837183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:31.527598 containerd[1462]: time="2026-04-21T10:39:31.527528995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:39:31.528409 containerd[1462]: time="2026-04-21T10:39:31.528379476Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:31.530462 containerd[1462]: time="2026-04-21T10:39:31.530436889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:39:31.530909 containerd[1462]: time="2026-04-21T10:39:31.530883051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.144433646s" Apr 21 10:39:31.530909 containerd[1462]: time="2026-04-21T10:39:31.530908454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:39:31.531804 containerd[1462]: time="2026-04-21T10:39:31.531782814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:39:31.535207 containerd[1462]: time="2026-04-21T10:39:31.535114174Z" level=info msg="CreateContainer within sandbox \"580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:39:31.560967 containerd[1462]: time="2026-04-21T10:39:31.560916620Z" level=info msg="CreateContainer within sandbox \"580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6da1d21c6d845b6e4899604b47542698cce90ca01fa1e7f55b93757c8c0e4a00\"" Apr 21 10:39:31.561521 containerd[1462]: time="2026-04-21T10:39:31.561469697Z" level=info msg="StartContainer for \"6da1d21c6d845b6e4899604b47542698cce90ca01fa1e7f55b93757c8c0e4a00\"" Apr 21 10:39:31.619250 systemd[1]: run-containerd-runc-k8s.io-6da1d21c6d845b6e4899604b47542698cce90ca01fa1e7f55b93757c8c0e4a00-runc.1AuOGU.mount: Deactivated successfully. Apr 21 10:39:31.631063 systemd[1]: Started cri-containerd-6da1d21c6d845b6e4899604b47542698cce90ca01fa1e7f55b93757c8c0e4a00.scope - libcontainer container 6da1d21c6d845b6e4899604b47542698cce90ca01fa1e7f55b93757c8c0e4a00. 
Apr 21 10:39:31.669643 containerd[1462]: time="2026-04-21T10:39:31.669574510Z" level=info msg="StartContainer for \"6da1d21c6d845b6e4899604b47542698cce90ca01fa1e7f55b93757c8c0e4a00\" returns successfully" Apr 21 10:39:33.575514 kubelet[2510]: I0421 10:39:33.575449 2510 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:39:34.287189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246604993.mount: Deactivated successfully. Apr 21 10:39:34.999477 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:34454.service - OpenSSH per-connection server daemon (10.0.0.1:34454). Apr 21 10:39:35.042050 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 34454 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:35.043253 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:35.047538 systemd-logind[1440]: New session 14 of user core. Apr 21 10:39:35.060068 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:39:35.278722 sshd[5603]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:35.289051 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:34454.service: Deactivated successfully. Apr 21 10:39:35.290702 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:39:35.291713 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:39:35.292715 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:42716.service - OpenSSH per-connection server daemon (10.0.0.1:42716). Apr 21 10:39:35.293425 systemd-logind[1440]: Removed session 14. Apr 21 10:39:35.323049 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 42716 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:35.324093 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:35.327728 systemd-logind[1440]: New session 15 of user core. 
Apr 21 10:39:35.333943 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:39:35.622088 sshd[5620]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:35.631331 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:42716.service: Deactivated successfully. Apr 21 10:39:35.632939 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:39:35.634266 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:39:35.635718 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:42726.service - OpenSSH per-connection server daemon (10.0.0.1:42726). Apr 21 10:39:35.636582 systemd-logind[1440]: Removed session 15. Apr 21 10:39:35.676498 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 42726 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:35.677901 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:35.682203 systemd-logind[1440]: New session 16 of user core. Apr 21 10:39:35.695142 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:39:36.311199 sshd[5633]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:36.320056 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:42726.service: Deactivated successfully. Apr 21 10:39:36.324912 containerd[1462]: time="2026-04-21T10:39:36.322480984Z" level=info msg="StopPodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\"" Apr 21 10:39:36.324050 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:39:36.329041 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:39:36.338390 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:42728.service - OpenSSH per-connection server daemon (10.0.0.1:42728). Apr 21 10:39:36.340109 systemd-logind[1440]: Removed session 16. 
Apr 21 10:39:36.364085 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 42728 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:36.365223 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:36.369348 systemd-logind[1440]: New session 17 of user core. Apr 21 10:39:36.375224 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.384 [WARNING][5673] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"b7a1fd50-1ed4-4589-8953-65abd596d417", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798", Pod:"goldmane-9f7667bb8-f2zn2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calib006519cd36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.385 [INFO][5673] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.385 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" iface="eth0" netns="" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.385 [INFO][5673] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.385 [INFO][5673] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.415 [INFO][5683] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.415 [INFO][5683] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.415 [INFO][5683] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.421 [WARNING][5683] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.422 [INFO][5683] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.423 [INFO][5683] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.427385 containerd[1462]: 2026-04-21 10:39:36.425 [INFO][5673] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.431074 containerd[1462]: time="2026-04-21T10:39:36.431022272Z" level=info msg="TearDown network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" successfully" Apr 21 10:39:36.431074 containerd[1462]: time="2026-04-21T10:39:36.431067700Z" level=info msg="StopPodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" returns successfully" Apr 21 10:39:36.465139 containerd[1462]: time="2026-04-21T10:39:36.464994998Z" level=info msg="RemovePodSandbox for \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\"" Apr 21 10:39:36.469310 containerd[1462]: time="2026-04-21T10:39:36.469263315Z" level=info msg="Forcibly stopping sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\"" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.503 [WARNING][5706] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"b7a1fd50-1ed4-4589-8953-65abd596d417", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61fd98e528d2360f9760ae2eda50da9459f2f4fa8f9a666cc8d9fc0beb0e8798", Pod:"goldmane-9f7667bb8-f2zn2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib006519cd36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.503 [INFO][5706] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.503 [INFO][5706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" iface="eth0" netns="" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.503 [INFO][5706] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.503 [INFO][5706] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.528 [INFO][5715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.529 [INFO][5715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.529 [INFO][5715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.535 [WARNING][5715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.535 [INFO][5715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" HandleID="k8s-pod-network.93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Workload="localhost-k8s-goldmane--9f7667bb8--f2zn2-eth0" Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.536 [INFO][5715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.540441 containerd[1462]: 2026-04-21 10:39:36.538 [INFO][5706] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5" Apr 21 10:39:36.541080 containerd[1462]: time="2026-04-21T10:39:36.540471638Z" level=info msg="TearDown network for sandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" successfully" Apr 21 10:39:36.562375 containerd[1462]: time="2026-04-21T10:39:36.562255600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:39:36.562375 containerd[1462]: time="2026-04-21T10:39:36.562335101Z" level=info msg="RemovePodSandbox \"93e2c3381b15cc8b79dde215dfd92e83298c4f86ee51beda510a8eea92175ab5\" returns successfully" Apr 21 10:39:36.568329 containerd[1462]: time="2026-04-21T10:39:36.568273009Z" level=info msg="StopPodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\"" Apr 21 10:39:36.651853 sshd[5660]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.610 [WARNING][5732] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kshx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fdc5648-d90a-492e-8550-ef4cb967e14b", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a", Pod:"csi-node-driver-kshx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190265231c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.611 [INFO][5732] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.611 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" iface="eth0" netns="" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.611 [INFO][5732] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.611 [INFO][5732] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.634 [INFO][5741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.634 [INFO][5741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.634 [INFO][5741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.652 [WARNING][5741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.652 [INFO][5741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.654 [INFO][5741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.658193 containerd[1462]: 2026-04-21 10:39:36.656 [INFO][5732] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.658193 containerd[1462]: time="2026-04-21T10:39:36.658035540Z" level=info msg="TearDown network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" successfully" Apr 21 10:39:36.658193 containerd[1462]: time="2026-04-21T10:39:36.658055003Z" level=info msg="StopPodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" returns successfully" Apr 21 10:39:36.659158 containerd[1462]: time="2026-04-21T10:39:36.658735271Z" level=info msg="RemovePodSandbox for \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\"" Apr 21 10:39:36.659128 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:42728.service: Deactivated successfully. 
Apr 21 10:39:36.659544 containerd[1462]: time="2026-04-21T10:39:36.659508659Z" level=info msg="Forcibly stopping sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\"" Apr 21 10:39:36.660589 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:39:36.662010 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:39:36.669332 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:42742.service - OpenSSH per-connection server daemon (10.0.0.1:42742). Apr 21 10:39:36.671044 systemd-logind[1440]: Removed session 17. Apr 21 10:39:36.718723 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 42742 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:39:36.720581 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:39:36.725064 systemd-logind[1440]: New session 18 of user core. Apr 21 10:39:36.737085 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.699 [WARNING][5762] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kshx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fdc5648-d90a-492e-8550-ef4cb967e14b", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b416eae979c49b4195c263f79cdcfc11457b3cadb901c781efddc23ddc0b255a", Pod:"csi-node-driver-kshx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190265231c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.700 [INFO][5762] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.700 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" iface="eth0" netns="" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.700 [INFO][5762] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.700 [INFO][5762] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.723 [INFO][5772] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.723 [INFO][5772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.723 [INFO][5772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.731 [WARNING][5772] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.731 [INFO][5772] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" HandleID="k8s-pod-network.7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Workload="localhost-k8s-csi--node--driver--kshx6-eth0" Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.734 [INFO][5772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.738323 containerd[1462]: 2026-04-21 10:39:36.736 [INFO][5762] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb" Apr 21 10:39:36.738855 containerd[1462]: time="2026-04-21T10:39:36.738832451Z" level=info msg="TearDown network for sandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" successfully" Apr 21 10:39:36.742532 containerd[1462]: time="2026-04-21T10:39:36.742453940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:39:36.742532 containerd[1462]: time="2026-04-21T10:39:36.742528814Z" level=info msg="RemovePodSandbox \"7c649abe80e7d47fd5e2ea0e3fd68f52011cb16a9ef71839d80aa93dc321f9fb\" returns successfully" Apr 21 10:39:36.743133 containerd[1462]: time="2026-04-21T10:39:36.743104107Z" level=info msg="StopPodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\"" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.781 [WARNING][5791] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cqqqk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"07a1844c-5aff-41d5-92cc-1b454992a7e4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4", Pod:"coredns-7d764666f9-cqqqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2fbb9ba0ae", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.781 [INFO][5791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.781 [INFO][5791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" iface="eth0" netns="" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.781 [INFO][5791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.781 [INFO][5791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.802 [INFO][5800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.802 [INFO][5800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.802 [INFO][5800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.808 [WARNING][5800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.808 [INFO][5800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.810 [INFO][5800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.814961 containerd[1462]: 2026-04-21 10:39:36.812 [INFO][5791] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.814961 containerd[1462]: time="2026-04-21T10:39:36.814888364Z" level=info msg="TearDown network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" successfully" Apr 21 10:39:36.814961 containerd[1462]: time="2026-04-21T10:39:36.814937810Z" level=info msg="StopPodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" returns successfully" Apr 21 10:39:36.816814 containerd[1462]: time="2026-04-21T10:39:36.815919060Z" level=info msg="RemovePodSandbox for \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\"" Apr 21 10:39:36.816814 containerd[1462]: time="2026-04-21T10:39:36.815955091Z" level=info msg="Forcibly stopping sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\"" Apr 21 10:39:36.877602 sshd[5761]: pam_unix(sshd:session): session closed for user core Apr 21 10:39:36.882409 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:42742.service: Deactivated successfully. 
Apr 21 10:39:36.883923 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:39:36.884958 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:39:36.886017 systemd-logind[1440]: Removed session 18. Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.855 [WARNING][5825] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cqqqk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"07a1844c-5aff-41d5-92cc-1b454992a7e4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bac938476e6b7788fab42c38ef484773dddecd06bdeb69a9b40eccabd71b6f4", Pod:"coredns-7d764666f9-cqqqk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2fbb9ba0ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.856 [INFO][5825] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.856 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" iface="eth0" netns="" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.856 [INFO][5825] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.856 [INFO][5825] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.878 [INFO][5833] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.879 [INFO][5833] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.879 [INFO][5833] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.884 [WARNING][5833] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.884 [INFO][5833] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" HandleID="k8s-pod-network.b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Workload="localhost-k8s-coredns--7d764666f9--cqqqk-eth0" Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.885 [INFO][5833] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.888731 containerd[1462]: 2026-04-21 10:39:36.887 [INFO][5825] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a" Apr 21 10:39:36.889078 containerd[1462]: time="2026-04-21T10:39:36.888798003Z" level=info msg="TearDown network for sandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" successfully" Apr 21 10:39:36.897154 containerd[1462]: time="2026-04-21T10:39:36.897053351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:39:36.897154 containerd[1462]: time="2026-04-21T10:39:36.897126041Z" level=info msg="RemovePodSandbox \"b61a26eb1540285fad0603353eb3a13317b7410d647f8303e3798eb178104d6a\" returns successfully" Apr 21 10:39:36.897642 containerd[1462]: time="2026-04-21T10:39:36.897615393Z" level=info msg="StopPodSandbox for \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\"" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.927 [WARNING][5853] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0", GenerateName:"calico-kube-controllers-6688bb788d-", Namespace:"calico-system", SelfLink:"", UID:"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6688bb788d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880", Pod:"calico-kube-controllers-6688bb788d-6t6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4024e080b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.927 [INFO][5853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.927 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" iface="eth0" netns="" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.927 [INFO][5853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.927 [INFO][5853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.946 [INFO][5861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.946 [INFO][5861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.946 [INFO][5861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.952 [WARNING][5861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.952 [INFO][5861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.954 [INFO][5861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:36.957819 containerd[1462]: 2026-04-21 10:39:36.956 [INFO][5853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:36.958504 containerd[1462]: time="2026-04-21T10:39:36.957847514Z" level=info msg="TearDown network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" successfully" Apr 21 10:39:36.958504 containerd[1462]: time="2026-04-21T10:39:36.957875135Z" level=info msg="StopPodSandbox for \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" returns successfully" Apr 21 10:39:36.958795 containerd[1462]: time="2026-04-21T10:39:36.958719702Z" level=info msg="RemovePodSandbox for \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\"" Apr 21 10:39:36.958819 containerd[1462]: time="2026-04-21T10:39:36.958796343Z" level=info msg="Forcibly stopping sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\"" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:36.994 [WARNING][5879] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0", GenerateName:"calico-kube-controllers-6688bb788d-", Namespace:"calico-system", SelfLink:"", UID:"5f329c28-cbdf-4ba9-ae13-ffbe6877b9d4", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6688bb788d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1930df7dcfaa35bd6c7c1588f0f33b2e9fc8ac32f958978ac5f9abe2a5725880", Pod:"calico-kube-controllers-6688bb788d-6t6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb4024e080b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:36.994 [INFO][5879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:36.994 [INFO][5879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" iface="eth0" netns="" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:36.994 [INFO][5879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:36.994 [INFO][5879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.015 [INFO][5887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.015 [INFO][5887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.015 [INFO][5887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.020 [WARNING][5887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.020 [INFO][5887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" HandleID="k8s-pod-network.713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Workload="localhost-k8s-calico--kube--controllers--6688bb788d--6t6qw-eth0" Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.022 [INFO][5887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:37.025410 containerd[1462]: 2026-04-21 10:39:37.023 [INFO][5879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2" Apr 21 10:39:37.025410 containerd[1462]: time="2026-04-21T10:39:37.025409580Z" level=info msg="TearDown network for sandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" successfully" Apr 21 10:39:37.028691 containerd[1462]: time="2026-04-21T10:39:37.028667642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:39:37.028790 containerd[1462]: time="2026-04-21T10:39:37.028716975Z" level=info msg="RemovePodSandbox \"713680b1167e8410214953d351f5cdac5405401317c31d88bd5a96d732689af2\" returns successfully" Apr 21 10:39:37.029355 containerd[1462]: time="2026-04-21T10:39:37.029329601Z" level=info msg="StopPodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\"" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.063 [WARNING][5905] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--mqdmg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f", Pod:"coredns-7d764666f9-mqdmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b63b00b6eb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.064 [INFO][5905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.064 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" iface="eth0" netns="" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.064 [INFO][5905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.064 [INFO][5905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.083 [INFO][5915] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.083 [INFO][5915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.083 [INFO][5915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.088 [WARNING][5915] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.089 [INFO][5915] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.090 [INFO][5915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:37.094577 containerd[1462]: 2026-04-21 10:39:37.092 [INFO][5905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.094577 containerd[1462]: time="2026-04-21T10:39:37.094546752Z" level=info msg="TearDown network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" successfully" Apr 21 10:39:37.095004 containerd[1462]: time="2026-04-21T10:39:37.094583715Z" level=info msg="StopPodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" returns successfully" Apr 21 10:39:37.095716 containerd[1462]: time="2026-04-21T10:39:37.095682056Z" level=info msg="RemovePodSandbox for \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\"" Apr 21 10:39:37.095795 containerd[1462]: time="2026-04-21T10:39:37.095724609Z" level=info msg="Forcibly stopping sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\"" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.126 [WARNING][5934] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--mqdmg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0bb43cc7-550a-4d6f-af40-1ccdb9ab522e", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03977a7a0f4266b6b5a1c351e2992db3eeb74ae6f1fbcbd290dfa0db6eb1131f", Pod:"coredns-7d764666f9-mqdmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b63b00b6eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.126 [INFO][5934] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.127 [INFO][5934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" iface="eth0" netns="" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.127 [INFO][5934] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.127 [INFO][5934] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.146 [INFO][5942] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.146 [INFO][5942] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.147 [INFO][5942] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.152 [WARNING][5942] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.152 [INFO][5942] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" HandleID="k8s-pod-network.32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Workload="localhost-k8s-coredns--7d764666f9--mqdmg-eth0" Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.154 [INFO][5942] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:37.157850 containerd[1462]: 2026-04-21 10:39:37.156 [INFO][5934] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8" Apr 21 10:39:37.158312 containerd[1462]: time="2026-04-21T10:39:37.157884617Z" level=info msg="TearDown network for sandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" successfully" Apr 21 10:39:37.161158 containerd[1462]: time="2026-04-21T10:39:37.161088847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:39:37.161158 containerd[1462]: time="2026-04-21T10:39:37.161156537Z" level=info msg="RemovePodSandbox \"32494d5f7b8ae4ff16ac333c3793d5be076abccda9ebcff2b85ab785fb7544f8\" returns successfully" Apr 21 10:39:37.161744 containerd[1462]: time="2026-04-21T10:39:37.161715083Z" level=info msg="StopPodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\"" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.193 [WARNING][5959] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" WorkloadEndpoint="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.193 [INFO][5959] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.193 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" iface="eth0" netns="" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.193 [INFO][5959] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.193 [INFO][5959] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.211 [INFO][5967] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.211 [INFO][5967] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.211 [INFO][5967] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.217 [WARNING][5967] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.217 [INFO][5967] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.219 [INFO][5967] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:37.221909 containerd[1462]: 2026-04-21 10:39:37.220 [INFO][5959] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.222225 containerd[1462]: time="2026-04-21T10:39:37.221917083Z" level=info msg="TearDown network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" successfully" Apr 21 10:39:37.222225 containerd[1462]: time="2026-04-21T10:39:37.221938075Z" level=info msg="StopPodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" returns successfully" Apr 21 10:39:37.222464 containerd[1462]: time="2026-04-21T10:39:37.222434774Z" level=info msg="RemovePodSandbox for \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\"" Apr 21 10:39:37.222543 containerd[1462]: time="2026-04-21T10:39:37.222468929Z" level=info msg="Forcibly stopping sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\"" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.258 [WARNING][5985] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" 
WorkloadEndpoint="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.258 [INFO][5985] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.258 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" iface="eth0" netns="" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.258 [INFO][5985] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.258 [INFO][5985] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.280 [INFO][5994] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.280 [INFO][5994] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.280 [INFO][5994] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.288 [WARNING][5994] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.289 [INFO][5994] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" HandleID="k8s-pod-network.74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Workload="localhost-k8s-whisker--fb5f4844c--k6m8p-eth0" Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.291 [INFO][5994] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:37.294386 containerd[1462]: 2026-04-21 10:39:37.292 [INFO][5985] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589" Apr 21 10:39:37.295026 containerd[1462]: time="2026-04-21T10:39:37.294412355Z" level=info msg="TearDown network for sandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" successfully" Apr 21 10:39:37.297573 containerd[1462]: time="2026-04-21T10:39:37.297518423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:39:37.297573 containerd[1462]: time="2026-04-21T10:39:37.297593941Z" level=info msg="RemovePodSandbox \"74e5e077fb82b0e271ac2783e55204104f235b825b2ffa1d196d119004acd589\" returns successfully" Apr 21 10:39:37.298214 containerd[1462]: time="2026-04-21T10:39:37.298147854Z" level=info msg="StopPodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\"" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.334 [WARNING][6012] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"8e8005a6-289b-49cd-bea4-c17d23bb38fa", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687", Pod:"calico-apiserver-6d96d7bfc7-rtbsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7b42effd92c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.335 [INFO][6012] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.335 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" iface="eth0" netns="" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.335 [INFO][6012] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.335 [INFO][6012] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.356 [INFO][6022] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.357 [INFO][6022] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.357 [INFO][6022] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.363 [WARNING][6022] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.363 [INFO][6022] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.366 [INFO][6022] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:39:37.369859 containerd[1462]: 2026-04-21 10:39:37.367 [INFO][6012] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.369859 containerd[1462]: time="2026-04-21T10:39:37.369797148Z" level=info msg="TearDown network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" successfully" Apr 21 10:39:37.369859 containerd[1462]: time="2026-04-21T10:39:37.369828163Z" level=info msg="StopPodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" returns successfully" Apr 21 10:39:37.370875 containerd[1462]: time="2026-04-21T10:39:37.370437378Z" level=info msg="RemovePodSandbox for \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\"" Apr 21 10:39:37.370875 containerd[1462]: time="2026-04-21T10:39:37.370461370Z" level=info msg="Forcibly stopping sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\"" Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.407 [WARNING][6039] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"8e8005a6-289b-49cd-bea4-c17d23bb38fa", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4dc1f2972887b7ddf63e26c1898da0f48e604dfc921bb578e1c1aaefca47687", Pod:"calico-apiserver-6d96d7bfc7-rtbsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7b42effd92c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.408 [INFO][6039] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.408 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" iface="eth0" netns="" Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.408 [INFO][6039] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.408 [INFO][6039] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.429 [INFO][6048] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0" Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.430 [INFO][6048] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.430 [INFO][6048] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.436 [WARNING][6048] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0"
Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.436 [INFO][6048] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" HandleID="k8s-pod-network.03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--rtbsk-eth0"
Apr 21 10:38:08.441500 containerd[1462]: 2026-04-21 10:39:37.438 [INFO][6048] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:37.441500 containerd[1462]: 2026-04-21 10:39:37.439 [INFO][6039] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6"
Apr 21 10:39:37.441963 containerd[1462]: time="2026-04-21T10:39:37.441541990Z" level=info msg="TearDown network for sandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" successfully"
Apr 21 10:39:37.444737 containerd[1462]: time="2026-04-21T10:39:37.444680528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:39:37.444737 containerd[1462]: time="2026-04-21T10:39:37.444738292Z" level=info msg="RemovePodSandbox \"03332fbdbb35c71a57cb0e5bbffe08c493895d8359a0b0eb53b87fab6616bfa6\" returns successfully"
Apr 21 10:39:37.445294 containerd[1462]: time="2026-04-21T10:39:37.445266445Z" level=info msg="StopPodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\""
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.478 [WARNING][6065] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"b7c9590f-6caa-4345-9c90-9f8d06102e17", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185", Pod:"calico-apiserver-6d96d7bfc7-92vsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7681e8dedf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.478 [INFO][6065] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.478 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" iface="eth0" netns=""
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.478 [INFO][6065] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.478 [INFO][6065] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.501 [INFO][6074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.501 [INFO][6074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.501 [INFO][6074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.507 [WARNING][6074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.507 [INFO][6074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.508 [INFO][6074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:37.511645 containerd[1462]: 2026-04-21 10:39:37.510 [INFO][6065] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.512115 containerd[1462]: time="2026-04-21T10:39:37.511663860Z" level=info msg="TearDown network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" successfully"
Apr 21 10:39:37.512115 containerd[1462]: time="2026-04-21T10:39:37.511687012Z" level=info msg="StopPodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" returns successfully"
Apr 21 10:39:37.512187 containerd[1462]: time="2026-04-21T10:39:37.512155190Z" level=info msg="RemovePodSandbox for \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\""
Apr 21 10:39:37.512310 containerd[1462]: time="2026-04-21T10:39:37.512193613Z" level=info msg="Forcibly stopping sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\""
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.550 [WARNING][6093] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0", GenerateName:"calico-apiserver-6d96d7bfc7-", Namespace:"calico-system", SelfLink:"", UID:"b7c9590f-6caa-4345-9c90-9f8d06102e17", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d96d7bfc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"580fe4b9e546af246d62abf539570467afe9f0616a8704a1fb6aca9b9ac8f185", Pod:"calico-apiserver-6d96d7bfc7-92vsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7681e8dedf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.550 [INFO][6093] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.550 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" iface="eth0" netns=""
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.550 [INFO][6093] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.550 [INFO][6093] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.574 [INFO][6101] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.574 [INFO][6101] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.574 [INFO][6101] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.580 [WARNING][6101] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.580 [INFO][6101] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" HandleID="k8s-pod-network.99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46" Workload="localhost-k8s-calico--apiserver--6d96d7bfc7--92vsq-eth0"
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.583 [INFO][6101] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:39:37.586347 containerd[1462]: 2026-04-21 10:39:37.584 [INFO][6093] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46"
Apr 21 10:39:37.586958 containerd[1462]: time="2026-04-21T10:39:37.586388398Z" level=info msg="TearDown network for sandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" successfully"
Apr 21 10:39:37.590266 containerd[1462]: time="2026-04-21T10:39:37.590179653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:39:37.590344 containerd[1462]: time="2026-04-21T10:39:37.590270661Z" level=info msg="RemovePodSandbox \"99d403629d7882916f2b2edbbcd296ecca98b6121af1d80a87b73a19bdb53a46\" returns successfully"
Apr 21 10:39:41.888123 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746).
Apr 21 10:39:41.920232 sshd[6128]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:39:41.921636 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:39:41.925361 systemd-logind[1440]: New session 19 of user core.
Apr 21 10:39:41.932921 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:39:42.032923 sshd[6128]: pam_unix(sshd:session): session closed for user core
Apr 21 10:39:42.035725 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:42746.service: Deactivated successfully.
Apr 21 10:39:42.037225 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:39:42.037858 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:39:42.038864 systemd-logind[1440]: Removed session 19.
Apr 21 10:39:45.586547 kubelet[2510]: I0421 10:39:45.586055 2510 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d96d7bfc7-92vsq" podStartSLOduration=46.97627036 podStartE2EDuration="53.586017914s" podCreationTimestamp="2026-04-21 10:38:52 +0000 UTC" firstStartedPulling="2026-04-21 10:39:24.921910809 +0000 UTC m=+48.675242974" lastFinishedPulling="2026-04-21 10:39:31.531658364 +0000 UTC m=+55.284990528" observedRunningTime="2026-04-21 10:39:32.582307658 +0000 UTC m=+56.335639822" watchObservedRunningTime="2026-04-21 10:39:45.586017914 +0000 UTC m=+69.339350132"
Apr 21 10:39:47.044599 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:38428.service - OpenSSH per-connection server daemon (10.0.0.1:38428).
Apr 21 10:39:47.090442 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 38428 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:39:47.091737 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:39:47.098919 systemd-logind[1440]: New session 20 of user core.
Apr 21 10:39:47.109167 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:39:47.283869 sshd[6167]: pam_unix(sshd:session): session closed for user core
Apr 21 10:39:47.287055 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:38428.service: Deactivated successfully.
Apr 21 10:39:47.288500 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:39:47.289198 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:39:47.290220 systemd-logind[1440]: Removed session 20.
Apr 21 10:39:52.300454 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:38438.service - OpenSSH per-connection server daemon (10.0.0.1:38438).
Apr 21 10:39:52.334597 kubelet[2510]: E0421 10:39:52.334561 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:39:52.351013 sshd[6191]: Accepted publickey for core from 10.0.0.1 port 38438 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:39:52.352642 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:39:52.358116 systemd-logind[1440]: New session 21 of user core.
Apr 21 10:39:52.366963 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:39:52.544398 sshd[6191]: pam_unix(sshd:session): session closed for user core
Apr 21 10:39:52.547215 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:38438.service: Deactivated successfully.
Apr 21 10:39:52.548812 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:39:52.549502 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:39:52.550265 systemd-logind[1440]: Removed session 21.
Apr 21 10:39:53.006977 kubelet[2510]: I0421 10:39:53.006888 2510 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:39:53.338457 kubelet[2510]: E0421 10:39:53.338398 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"