Apr 25 00:05:58.854334 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 25 00:05:58.854352 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 25 00:05:58.854362 kernel: BIOS-provided physical RAM map:
Apr 25 00:05:58.854367 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 25 00:05:58.854373 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 25 00:05:58.854378 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 25 00:05:58.854384 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 25 00:05:58.854389 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 25 00:05:58.854394 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 25 00:05:58.854399 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 25 00:05:58.854406 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 25 00:05:58.854411 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 25 00:05:58.854416 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 25 00:05:58.854421 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 25 00:05:58.854428 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 25 00:05:58.854433 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 25 00:05:58.854440 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 25 00:05:58.854445 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 25 00:05:58.854451 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 25 00:05:58.854456 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 25 00:05:58.854461 kernel: NX (Execute Disable) protection: active
Apr 25 00:05:58.854467 kernel: APIC: Static calls initialized
Apr 25 00:05:58.854472 kernel: efi: EFI v2.7 by EDK II
Apr 25 00:05:58.854477 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 25 00:05:58.854483 kernel: SMBIOS 2.8 present.
Apr 25 00:05:58.854488 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 25 00:05:58.854494 kernel: Hypervisor detected: KVM
Apr 25 00:05:58.854500 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 25 00:05:58.854506 kernel: kvm-clock: using sched offset of 4670289540 cycles
Apr 25 00:05:58.854512 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 25 00:05:58.854517 kernel: tsc: Detected 2793.438 MHz processor
Apr 25 00:05:58.854523 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 25 00:05:58.854529 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 25 00:05:58.854535 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 25 00:05:58.854540 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 25 00:05:58.854546 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 25 00:05:58.854553 kernel: Using GB pages for direct mapping
Apr 25 00:05:58.854558 kernel: Secure boot disabled
Apr 25 00:05:58.854564 kernel: ACPI: Early table checksum verification disabled
Apr 25 00:05:58.854570 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 25 00:05:58.854578 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 25 00:05:58.854584 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 25 00:05:58.854590 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 25 00:05:58.854597 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 25 00:05:58.854603 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 25 00:05:58.854638 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 25 00:05:58.854644 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 25 00:05:58.854650 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 25 00:05:58.854655 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 25 00:05:58.854659 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 25 00:05:58.854666 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 25 00:05:58.854671 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 25 00:05:58.854676 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 25 00:05:58.854680 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 25 00:05:58.854685 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 25 00:05:58.854690 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 25 00:05:58.854708 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 25 00:05:58.854713 kernel: No NUMA configuration found
Apr 25 00:05:58.854718 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 25 00:05:58.854724 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 25 00:05:58.854729 kernel: Zone ranges:
Apr 25 00:05:58.854734 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 25 00:05:58.854739 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 25 00:05:58.854744 kernel: Normal empty
Apr 25 00:05:58.854749 kernel: Movable zone start for each node
Apr 25 00:05:58.854754 kernel: Early memory node ranges
Apr 25 00:05:58.854759 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 25 00:05:58.854764 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 25 00:05:58.854769 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 25 00:05:58.854775 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 25 00:05:58.854780 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 25 00:05:58.854785 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 25 00:05:58.854789 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 25 00:05:58.854794 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 25 00:05:58.854799 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 25 00:05:58.854804 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 25 00:05:58.854809 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 25 00:05:58.854814 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 25 00:05:58.854821 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 25 00:05:58.854826 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 25 00:05:58.854830 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 25 00:05:58.854835 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 25 00:05:58.854840 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 25 00:05:58.854845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 25 00:05:58.854850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 25 00:05:58.854855 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 25 00:05:58.854860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 25 00:05:58.854866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 25 00:05:58.854871 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 25 00:05:58.854876 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 25 00:05:58.854881 kernel: TSC deadline timer available
Apr 25 00:05:58.854886 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 25 00:05:58.854891 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 25 00:05:58.854896 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 25 00:05:58.854900 kernel: kvm-guest: setup PV sched yield
Apr 25 00:05:58.854905 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 25 00:05:58.854910 kernel: Booting paravirtualized kernel on KVM
Apr 25 00:05:58.854917 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 25 00:05:58.854922 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 25 00:05:58.854927 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 25 00:05:58.854932 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 25 00:05:58.854937 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 25 00:05:58.854941 kernel: kvm-guest: PV spinlocks enabled
Apr 25 00:05:58.854946 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 25 00:05:58.854952 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 25 00:05:58.854958 kernel: random: crng init done
Apr 25 00:05:58.854963 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 25 00:05:58.854968 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 25 00:05:58.854973 kernel: Fallback order for Node 0: 0
Apr 25 00:05:58.854978 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 25 00:05:58.854983 kernel: Policy zone: DMA32
Apr 25 00:05:58.854988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 25 00:05:58.854993 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167136K reserved, 0K cma-reserved)
Apr 25 00:05:58.854998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 25 00:05:58.855004 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 25 00:05:58.855009 kernel: ftrace: allocated 149 pages with 4 groups
Apr 25 00:05:58.855014 kernel: Dynamic Preempt: voluntary
Apr 25 00:05:58.855019 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 25 00:05:58.855031 kernel: rcu: RCU event tracing is enabled.
Apr 25 00:05:58.855042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 25 00:05:58.855051 kernel: Trampoline variant of Tasks RCU enabled.
Apr 25 00:05:58.855060 kernel: Rude variant of Tasks RCU enabled.
Apr 25 00:05:58.855068 kernel: Tracing variant of Tasks RCU enabled.
Apr 25 00:05:58.855077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 25 00:05:58.855086 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 25 00:05:58.855096 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 25 00:05:58.855108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 25 00:05:58.855118 kernel: Console: colour dummy device 80x25
Apr 25 00:05:58.855136 kernel: printk: console [ttyS0] enabled
Apr 25 00:05:58.855142 kernel: ACPI: Core revision 20230628
Apr 25 00:05:58.855148 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 25 00:05:58.855155 kernel: APIC: Switch to symmetric I/O mode setup
Apr 25 00:05:58.855161 kernel: x2apic enabled
Apr 25 00:05:58.855166 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 25 00:05:58.855172 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 25 00:05:58.855177 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 25 00:05:58.855183 kernel: kvm-guest: setup PV IPIs
Apr 25 00:05:58.855188 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 25 00:05:58.855194 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 25 00:05:58.855199 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 25 00:05:58.855206 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 25 00:05:58.855211 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 25 00:05:58.855217 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 25 00:05:58.855222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 25 00:05:58.855228 kernel: Spectre V2 : Mitigation: Retpolines
Apr 25 00:05:58.855233 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 25 00:05:58.855239 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 25 00:05:58.855244 kernel: RETBleed: Vulnerable
Apr 25 00:05:58.855251 kernel: Speculative Store Bypass: Vulnerable
Apr 25 00:05:58.855256 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 25 00:05:58.855262 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 25 00:05:58.855267 kernel: active return thunk: its_return_thunk
Apr 25 00:05:58.855273 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 25 00:05:58.855278 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 25 00:05:58.855284 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 25 00:05:58.855289 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 25 00:05:58.855294 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 25 00:05:58.855301 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 25 00:05:58.855307 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 25 00:05:58.855312 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 25 00:05:58.855317 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 25 00:05:58.855323 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 25 00:05:58.855328 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 25 00:05:58.855334 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 25 00:05:58.855339 kernel: Freeing SMP alternatives memory: 32K
Apr 25 00:05:58.855344 kernel: pid_max: default: 32768 minimum: 301
Apr 25 00:05:58.855351 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 25 00:05:58.855356 kernel: landlock: Up and running.
Apr 25 00:05:58.855362 kernel: SELinux: Initializing.
Apr 25 00:05:58.855367 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 25 00:05:58.855373 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 25 00:05:58.855378 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 25 00:05:58.855384 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 25 00:05:58.855389 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 25 00:05:58.855395 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 25 00:05:58.855401 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 25 00:05:58.855407 kernel: signal: max sigframe size: 3632
Apr 25 00:05:58.855413 kernel: rcu: Hierarchical SRCU implementation.
Apr 25 00:05:58.855418 kernel: rcu: Max phase no-delay instances is 400.
Apr 25 00:05:58.855424 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 25 00:05:58.855429 kernel: smp: Bringing up secondary CPUs ...
Apr 25 00:05:58.855434 kernel: smpboot: x86: Booting SMP configuration:
Apr 25 00:05:58.855440 kernel: .... node #0, CPUs: #1 #2 #3
Apr 25 00:05:58.855445 kernel: smp: Brought up 1 node, 4 CPUs
Apr 25 00:05:58.855452 kernel: smpboot: Max logical packages: 1
Apr 25 00:05:58.855457 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 25 00:05:58.855462 kernel: devtmpfs: initialized
Apr 25 00:05:58.855468 kernel: x86/mm: Memory block size: 128MB
Apr 25 00:05:58.855473 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 25 00:05:58.855479 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 25 00:05:58.855484 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 25 00:05:58.855490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 25 00:05:58.855495 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 25 00:05:58.855502 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 25 00:05:58.855507 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 25 00:05:58.855512 kernel: pinctrl core: initialized pinctrl subsystem
Apr 25 00:05:58.855518 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 25 00:05:58.855523 kernel: audit: initializing netlink subsys (disabled)
Apr 25 00:05:58.855529 kernel: audit: type=2000 audit(1777075558.225:1): state=initialized audit_enabled=0 res=1
Apr 25 00:05:58.855534 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 25 00:05:58.855539 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 25 00:05:58.855545 kernel: cpuidle: using governor menu
Apr 25 00:05:58.855551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 25 00:05:58.855557 kernel: dca service started, version 1.12.1
Apr 25 00:05:58.855562 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 25 00:05:58.855568 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 25 00:05:58.855573 kernel: PCI: Using configuration type 1 for base access
Apr 25 00:05:58.855578 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 25 00:05:58.855584 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 25 00:05:58.855589 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 25 00:05:58.855595 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 25 00:05:58.855601 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 25 00:05:58.855637 kernel: ACPI: Added _OSI(Module Device)
Apr 25 00:05:58.855643 kernel: ACPI: Added _OSI(Processor Device)
Apr 25 00:05:58.855649 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 25 00:05:58.855654 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 25 00:05:58.855660 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 25 00:05:58.855665 kernel: ACPI: Interpreter enabled
Apr 25 00:05:58.855671 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 25 00:05:58.855676 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 25 00:05:58.855683 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 25 00:05:58.855689 kernel: PCI: Using E820 reservations for host bridge windows
Apr 25 00:05:58.855707 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 25 00:05:58.855713 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 25 00:05:58.855821 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 25 00:05:58.855884 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 25 00:05:58.855940 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 25 00:05:58.855949 kernel: PCI host bridge to bus 0000:00
Apr 25 00:05:58.856010 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 25 00:05:58.856060 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 25 00:05:58.856133 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 25 00:05:58.856183 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 25 00:05:58.856231 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 25 00:05:58.856279 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 25 00:05:58.856330 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 25 00:05:58.856401 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 25 00:05:58.856461 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 25 00:05:58.856518 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 25 00:05:58.856571 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 25 00:05:58.856656 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 25 00:05:58.856731 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 25 00:05:58.856789 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 25 00:05:58.856851 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 25 00:05:58.856908 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 25 00:05:58.856963 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 25 00:05:58.857019 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 25 00:05:58.857083 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 25 00:05:58.857141 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 25 00:05:58.857197 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 25 00:05:58.857252 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 25 00:05:58.857311 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 25 00:05:58.857366 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 25 00:05:58.857420 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 25 00:05:58.857475 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 25 00:05:58.857532 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 25 00:05:58.857591 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 25 00:05:58.857677 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 25 00:05:58.857757 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 25 00:05:58.857813 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 25 00:05:58.857868 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 25 00:05:58.857926 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 25 00:05:58.857984 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 25 00:05:58.857991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 25 00:05:58.857997 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 25 00:05:58.858002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 25 00:05:58.858008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 25 00:05:58.858013 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 25 00:05:58.858019 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 25 00:05:58.858024 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 25 00:05:58.858031 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 25 00:05:58.858037 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 25 00:05:58.858042 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 25 00:05:58.858048 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 25 00:05:58.858053 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 25 00:05:58.858059 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 25 00:05:58.858064 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 25 00:05:58.858070 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 25 00:05:58.858075 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 25 00:05:58.858082 kernel: iommu: Default domain type: Translated
Apr 25 00:05:58.858088 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 25 00:05:58.858093 kernel: efivars: Registered efivars operations
Apr 25 00:05:58.858099 kernel: PCI: Using ACPI for IRQ routing
Apr 25 00:05:58.858104 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 25 00:05:58.858110 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 25 00:05:58.858115 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 25 00:05:58.858120 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 25 00:05:58.858126 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 25 00:05:58.858180 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 25 00:05:58.858234 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 25 00:05:58.858287 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 25 00:05:58.858294 kernel: vgaarb: loaded
Apr 25 00:05:58.858300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 25 00:05:58.858306 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 25 00:05:58.858311 kernel: clocksource: Switched to clocksource kvm-clock
Apr 25 00:05:58.858317 kernel: VFS: Disk quotas dquot_6.6.0
Apr 25 00:05:58.858322 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 25 00:05:58.858329 kernel: pnp: PnP ACPI init
Apr 25 00:05:58.858390 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 25 00:05:58.858397 kernel: pnp: PnP ACPI: found 6 devices
Apr 25 00:05:58.858403 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 25 00:05:58.858408 kernel: NET: Registered PF_INET protocol family
Apr 25 00:05:58.858414 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 25 00:05:58.858420 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 25 00:05:58.858425 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 25 00:05:58.858433 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 25 00:05:58.858438 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 25 00:05:58.858444 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 25 00:05:58.858450 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 25 00:05:58.858455 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 25 00:05:58.858461 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 25 00:05:58.858466 kernel: NET: Registered PF_XDP protocol family
Apr 25 00:05:58.858736 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 25 00:05:58.858800 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 25 00:05:58.858855 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 25 00:05:58.858904 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 25 00:05:58.858952 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 25 00:05:58.859001 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 25 00:05:58.859048 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 25 00:05:58.859095 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 25 00:05:58.859102 kernel: PCI: CLS 0 bytes, default 64
Apr 25 00:05:58.859108 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 25 00:05:58.859115 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 25 00:05:58.859121 kernel: Initialise system trusted keyrings
Apr 25 00:05:58.859126 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 25 00:05:58.859132 kernel: Key type asymmetric registered
Apr 25 00:05:58.859137 kernel: Asymmetric key parser 'x509' registered
Apr 25 00:05:58.859142 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 25 00:05:58.859148 kernel: io scheduler mq-deadline registered
Apr 25 00:05:58.859153 kernel: io scheduler kyber registered
Apr 25 00:05:58.859160 kernel: io scheduler bfq registered
Apr 25 00:05:58.859165 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 25 00:05:58.859171 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 25 00:05:58.859177 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 25 00:05:58.859182 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 25 00:05:58.859187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 25 00:05:58.859193 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 25 00:05:58.859198 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 25 00:05:58.859204 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 25 00:05:58.859210 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 25 00:05:58.859266 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 25 00:05:58.859273 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 25 00:05:58.859322 kernel: rtc_cmos 00:04: registered as rtc0
Apr 25 00:05:58.859372 kernel: rtc_cmos 00:04: setting system clock to 2026-04-25T00:05:58 UTC (1777075558)
Apr 25 00:05:58.859422 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 25 00:05:58.859428 kernel: intel_pstate: CPU model not supported
Apr 25 00:05:58.859434 kernel: efifb: probing for efifb
Apr 25 00:05:58.859441 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 25 00:05:58.859446 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 25 00:05:58.859452 kernel: efifb: scrolling: redraw
Apr 25 00:05:58.859457 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 25 00:05:58.859462 kernel: Console: switching to colour frame buffer device 100x37
Apr 25 00:05:58.859480 kernel: fb0: EFI VGA frame buffer device
Apr 25 00:05:58.859505 kernel: pstore: Using crash dump compression: deflate
Apr 25 00:05:58.859520 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 25 00:05:58.859533 kernel: NET: Registered PF_INET6 protocol family
Apr 25 00:05:58.859547 kernel: Segment Routing with IPv6
Apr 25 00:05:58.859560 kernel: In-situ OAM (IOAM) with IPv6
Apr 25 00:05:58.859653 kernel: NET: Registered PF_PACKET protocol family
Apr 25 00:05:58.859668 kernel: Key type dns_resolver registered
Apr 25 00:05:58.859688 kernel: IPI shorthand broadcast: enabled
Apr 25 00:05:58.859708 kernel: sched_clock: Marking stable (737013357, 206849792)->(999315037, -55451888)
Apr 25 00:05:58.859725 kernel: registered taskstats version 1
Apr 25 00:05:58.859738 kernel: Loading compiled-in X.509 certificates
Apr 25 00:05:58.859752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 25 00:05:58.859773 kernel: Key type .fscrypt registered
Apr 25 00:05:58.859786 kernel: Key type fscrypt-provisioning registered
Apr 25 00:05:58.859799 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 25 00:05:58.859812 kernel: ima: Allocated hash algorithm: sha1 Apr 25 00:05:58.859817 kernel: ima: No architecture policies found Apr 25 00:05:58.859830 kernel: clk: Disabling unused clocks Apr 25 00:05:58.859836 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 25 00:05:58.859842 kernel: Write protecting the kernel read-only data: 36864k Apr 25 00:05:58.859855 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 25 00:05:58.859860 kernel: Run /init as init process Apr 25 00:05:58.859875 kernel: with arguments: Apr 25 00:05:58.859894 kernel: /init Apr 25 00:05:58.859907 kernel: with environment: Apr 25 00:05:58.859913 kernel: HOME=/ Apr 25 00:05:58.859918 kernel: TERM=linux Apr 25 00:05:58.859926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 25 00:05:58.859934 systemd[1]: Detected virtualization kvm. Apr 25 00:05:58.859941 systemd[1]: Detected architecture x86-64. Apr 25 00:05:58.859950 systemd[1]: Running in initrd. Apr 25 00:05:58.859956 systemd[1]: No hostname configured, using default hostname. Apr 25 00:05:58.859962 systemd[1]: Hostname set to . Apr 25 00:05:58.859968 systemd[1]: Initializing machine ID from VM UUID. Apr 25 00:05:58.859975 systemd[1]: Queued start job for default target initrd.target. Apr 25 00:05:58.859981 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 25 00:05:58.859987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 25 00:05:58.859994 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 25 00:05:58.860000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 25 00:05:58.860006 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 25 00:05:58.860012 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 25 00:05:58.860020 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 25 00:05:58.860037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 25 00:05:58.860043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 25 00:05:58.860049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 25 00:05:58.860064 systemd[1]: Reached target paths.target - Path Units. Apr 25 00:05:58.860070 systemd[1]: Reached target slices.target - Slice Units. Apr 25 00:05:58.860076 systemd[1]: Reached target swap.target - Swaps. Apr 25 00:05:58.860082 systemd[1]: Reached target timers.target - Timer Units. Apr 25 00:05:58.860090 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 25 00:05:58.860096 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 25 00:05:58.860102 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 25 00:05:58.860108 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 25 00:05:58.860114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 25 00:05:58.860120 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 25 00:05:58.860125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 25 00:05:58.860132 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 25 00:05:58.860138 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 25 00:05:58.860145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 25 00:05:58.860151 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 25 00:05:58.860157 systemd[1]: Starting systemd-fsck-usr.service... Apr 25 00:05:58.860163 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 25 00:05:58.860169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 25 00:05:58.860175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 00:05:58.860181 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 25 00:05:58.860200 systemd-journald[194]: Collecting audit messages is disabled. Apr 25 00:05:58.860217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 25 00:05:58.860223 systemd[1]: Finished systemd-fsck-usr.service. Apr 25 00:05:58.860231 systemd-journald[194]: Journal started Apr 25 00:05:58.860246 systemd-journald[194]: Runtime Journal (/run/log/journal/4c51e0e289f3428999dca63656e93f74) is 6.0M, max 48.3M, 42.2M free. Apr 25 00:05:58.860079 systemd-modules-load[195]: Inserted module 'overlay' Apr 25 00:05:58.870587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 25 00:05:58.870628 systemd[1]: Started systemd-journald.service - Journal Service. Apr 25 00:05:58.873521 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 25 00:05:58.878910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 25 00:05:58.883678 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 25 00:05:58.887863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 25 00:05:58.891404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 25 00:05:58.893955 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 25 00:05:58.897027 kernel: Bridge firewalling registered Apr 25 00:05:58.894177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 25 00:05:58.897144 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 25 00:05:58.900415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 25 00:05:58.912810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 25 00:05:58.914162 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 25 00:05:58.924577 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 25 00:05:58.926450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 25 00:05:58.928545 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 25 00:05:58.929745 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 25 00:05:58.944400 dracut-cmdline[234]: dracut-dracut-053 Apr 25 00:05:58.947235 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 25 00:05:58.952464 systemd-resolved[231]: Positive Trust Anchors: Apr 25 00:05:58.952471 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 25 00:05:58.952496 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 25 00:05:58.954342 systemd-resolved[231]: Defaulting to hostname 'linux'. Apr 25 00:05:58.955059 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 25 00:05:58.958289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 25 00:05:59.028675 kernel: SCSI subsystem initialized Apr 25 00:05:59.036656 kernel: Loading iSCSI transport class v2.0-870. Apr 25 00:05:59.046664 kernel: iscsi: registered transport (tcp) Apr 25 00:05:59.064654 kernel: iscsi: registered transport (qla4xxx) Apr 25 00:05:59.064719 kernel: QLogic iSCSI HBA Driver Apr 25 00:05:59.095186 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 25 00:05:59.107845 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 25 00:05:59.132670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 25 00:05:59.132740 kernel: device-mapper: uevent: version 1.0.3 Apr 25 00:05:59.132754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 25 00:05:59.171658 kernel: raid6: avx512x4 gen() 47408 MB/s Apr 25 00:05:59.188651 kernel: raid6: avx512x2 gen() 46379 MB/s Apr 25 00:05:59.205650 kernel: raid6: avx512x1 gen() 46239 MB/s Apr 25 00:05:59.222656 kernel: raid6: avx2x4 gen() 38292 MB/s Apr 25 00:05:59.239649 kernel: raid6: avx2x2 gen() 37904 MB/s Apr 25 00:05:59.257297 kernel: raid6: avx2x1 gen() 28655 MB/s Apr 25 00:05:59.257315 kernel: raid6: using algorithm avx512x4 gen() 47408 MB/s Apr 25 00:05:59.275471 kernel: raid6: .... xor() 10415 MB/s, rmw enabled Apr 25 00:05:59.275567 kernel: raid6: using avx512x2 recovery algorithm Apr 25 00:05:59.293654 kernel: xor: automatically using best checksumming function avx Apr 25 00:05:59.419686 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 25 00:05:59.428712 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 25 00:05:59.433867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 25 00:05:59.443960 systemd-udevd[417]: Using default interface naming scheme 'v255'. Apr 25 00:05:59.446694 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 25 00:05:59.459851 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 25 00:05:59.473862 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Apr 25 00:05:59.497469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 25 00:05:59.517973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 25 00:05:59.549545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 25 00:05:59.556855 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 25 00:05:59.566600 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 25 00:05:59.572387 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 25 00:05:59.575242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 25 00:05:59.582446 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 25 00:05:59.597147 kernel: cryptd: max_cpu_qlen set to 1000 Apr 25 00:05:59.597191 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 25 00:05:59.597460 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 25 00:05:59.609273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 25 00:05:59.615868 kernel: AVX2 version of gcm_enc/dec engaged. Apr 25 00:05:59.615899 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 25 00:05:59.616060 kernel: AES CTR mode by8 optimization enabled Apr 25 00:05:59.610196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 25 00:05:59.623646 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 25 00:05:59.623662 kernel: GPT:9289727 != 19775487 Apr 25 00:05:59.623669 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 25 00:05:59.623676 kernel: GPT:9289727 != 19775487 Apr 25 00:05:59.623682 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 25 00:05:59.623695 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 25 00:05:59.628067 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 25 00:05:59.629884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 25 00:05:59.630097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 00:05:59.630940 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 25 00:05:59.644640 kernel: libata version 3.00 loaded. Apr 25 00:05:59.652032 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476) Apr 25 00:05:59.650938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 00:05:59.662276 kernel: ahci 0000:00:1f.2: version 3.0 Apr 25 00:05:59.662399 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 25 00:05:59.662409 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 25 00:05:59.662478 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (467) Apr 25 00:05:59.662486 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 25 00:05:59.662552 kernel: scsi host0: ahci Apr 25 00:05:59.664126 kernel: scsi host1: ahci Apr 25 00:05:59.664211 kernel: scsi host2: ahci Apr 25 00:05:59.654635 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 25 00:05:59.668064 kernel: scsi host3: ahci Apr 25 00:05:59.668374 kernel: scsi host4: ahci Apr 25 00:05:59.670671 kernel: scsi host5: ahci Apr 25 00:05:59.670812 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 25 00:05:59.670822 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 25 00:05:59.673040 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 25 00:05:59.673084 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 25 00:05:59.675431 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 25 00:05:59.676645 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 25 00:05:59.678003 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 25 00:05:59.692901 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 25 00:05:59.697888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 25 00:05:59.701863 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 25 00:05:59.702413 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 25 00:05:59.717940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 25 00:05:59.721657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 25 00:05:59.721715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 00:05:59.729398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 25 00:05:59.729417 disk-uuid[570]: Primary Header is updated. Apr 25 00:05:59.729417 disk-uuid[570]: Secondary Entries is updated. Apr 25 00:05:59.729417 disk-uuid[570]: Secondary Header is updated. Apr 25 00:05:59.725694 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 00:05:59.727552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 00:05:59.740237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 00:05:59.750880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 25 00:05:59.766144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 25 00:06:00.013332 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 25 00:06:00.013414 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 25 00:06:00.013438 kernel: ata3.00: applying bridge limits Apr 25 00:06:00.014959 kernel: ata3.00: configured for UDMA/100 Apr 25 00:06:00.018725 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 25 00:06:00.018752 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 25 00:06:00.018758 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 25 00:06:00.019840 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 25 00:06:00.019869 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 25 00:06:00.020642 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 25 00:06:00.062009 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 25 00:06:00.062221 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 25 00:06:00.080653 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 25 00:06:00.738658 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 25 00:06:00.739126 disk-uuid[572]: The operation has completed successfully. Apr 25 00:06:00.764169 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 25 00:06:00.764276 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 25 00:06:00.787891 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 25 00:06:00.791685 sh[603]: Success Apr 25 00:06:00.802673 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 25 00:06:00.833456 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 25 00:06:00.848011 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 25 00:06:00.850826 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 25 00:06:00.861678 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681 Apr 25 00:06:00.861738 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 25 00:06:00.863418 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 25 00:06:00.863457 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 25 00:06:00.864769 kernel: BTRFS info (device dm-0): using free space tree Apr 25 00:06:00.871441 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 25 00:06:00.874484 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 25 00:06:00.887822 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 25 00:06:00.891108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 25 00:06:00.899057 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 25 00:06:00.899094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 25 00:06:00.899102 kernel: BTRFS info (device vda6): using free space tree Apr 25 00:06:00.903646 kernel: BTRFS info (device vda6): auto enabling async discard Apr 25 00:06:00.911027 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 25 00:06:00.914072 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 25 00:06:00.921473 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 25 00:06:00.935890 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 25 00:06:00.982389 ignition[695]: Ignition 2.19.0 Apr 25 00:06:00.982405 ignition[695]: Stage: fetch-offline Apr 25 00:06:00.982433 ignition[695]: no configs at "/usr/lib/ignition/base.d" Apr 25 00:06:00.982440 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 25 00:06:00.982540 ignition[695]: parsed url from cmdline: "" Apr 25 00:06:00.982543 ignition[695]: no config URL provided Apr 25 00:06:00.982547 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Apr 25 00:06:00.982553 ignition[695]: no config at "/usr/lib/ignition/user.ign" Apr 25 00:06:00.982572 ignition[695]: op(1): [started] loading QEMU firmware config module Apr 25 00:06:00.982578 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 25 00:06:00.988576 ignition[695]: op(1): [finished] loading QEMU firmware config module Apr 25 00:06:01.000360 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 25 00:06:01.010852 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 25 00:06:01.026492 systemd-networkd[792]: lo: Link UP Apr 25 00:06:01.026514 systemd-networkd[792]: lo: Gained carrier Apr 25 00:06:01.027450 systemd-networkd[792]: Enumeration completed Apr 25 00:06:01.027983 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 25 00:06:01.027985 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 25 00:06:01.028588 systemd-networkd[792]: eth0: Link UP Apr 25 00:06:01.028591 systemd-networkd[792]: eth0: Gained carrier Apr 25 00:06:01.028598 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 25 00:06:01.028696 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 25 00:06:01.031973 systemd[1]: Reached target network.target - Network. Apr 25 00:06:01.048688 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 25 00:06:01.112932 ignition[695]: parsing config with SHA512: 2353aa2e602f5b43986f9ca1782a24bd6bc87795e8b6f33830ea7ca94105adeb6cb0cba6d0ca18e2ae73873f4820d94a55f78feb233c4ec0ef1a41c56aae37c2 Apr 25 00:06:01.115814 unknown[695]: fetched base config from "system" Apr 25 00:06:01.115825 unknown[695]: fetched user config from "qemu" Apr 25 00:06:01.116107 ignition[695]: fetch-offline: fetch-offline passed Apr 25 00:06:01.117722 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 25 00:06:01.116152 ignition[695]: Ignition finished successfully Apr 25 00:06:01.120434 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 25 00:06:01.127819 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 25 00:06:01.137469 ignition[796]: Ignition 2.19.0 Apr 25 00:06:01.137481 ignition[796]: Stage: kargs Apr 25 00:06:01.137634 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 25 00:06:01.137642 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 25 00:06:01.138218 ignition[796]: kargs: kargs passed Apr 25 00:06:01.138245 ignition[796]: Ignition finished successfully Apr 25 00:06:01.144785 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 25 00:06:01.153762 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 25 00:06:01.164297 ignition[804]: Ignition 2.19.0 Apr 25 00:06:01.164309 ignition[804]: Stage: disks Apr 25 00:06:01.164454 ignition[804]: no configs at "/usr/lib/ignition/base.d" Apr 25 00:06:01.164461 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 25 00:06:01.165092 ignition[804]: disks: disks passed Apr 25 00:06:01.165121 ignition[804]: Ignition finished successfully Apr 25 00:06:01.170346 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 25 00:06:01.173923 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 25 00:06:01.177040 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 25 00:06:01.177589 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 25 00:06:01.180980 systemd[1]: Reached target sysinit.target - System Initialization. Apr 25 00:06:01.183294 systemd[1]: Reached target basic.target - Basic System. Apr 25 00:06:01.193888 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 25 00:06:01.203640 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 25 00:06:01.207255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 25 00:06:01.208743 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 25 00:06:01.286649 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none. Apr 25 00:06:01.287036 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 25 00:06:01.289686 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 25 00:06:01.302705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 25 00:06:01.307176 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 25 00:06:01.308221 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 25 00:06:01.314793 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822) Apr 25 00:06:01.308249 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 25 00:06:01.320011 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 25 00:06:01.320034 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 25 00:06:01.320048 kernel: BTRFS info (device vda6): using free space tree Apr 25 00:06:01.308267 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 25 00:06:01.317197 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 25 00:06:01.323395 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 25 00:06:01.331645 kernel: BTRFS info (device vda6): auto enabling async discard Apr 25 00:06:01.332179 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 25 00:06:01.355825 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Apr 25 00:06:01.362515 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Apr 25 00:06:01.369025 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Apr 25 00:06:01.373825 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Apr 25 00:06:01.459832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 25 00:06:01.476009 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 25 00:06:01.482056 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 25 00:06:01.524982 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 25 00:06:01.554028 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 25 00:06:01.556665 ignition[936]: INFO : Ignition 2.19.0 Apr 25 00:06:01.556665 ignition[936]: INFO : Stage: mount Apr 25 00:06:01.556665 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 25 00:06:01.556665 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 25 00:06:01.556665 ignition[936]: INFO : mount: mount passed Apr 25 00:06:01.556665 ignition[936]: INFO : Ignition finished successfully Apr 25 00:06:01.556870 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 25 00:06:01.564737 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 25 00:06:01.860279 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 25 00:06:01.868965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 25 00:06:01.874652 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (949) Apr 25 00:06:01.877530 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 25 00:06:01.877547 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 25 00:06:01.877556 kernel: BTRFS info (device vda6): using free space tree Apr 25 00:06:01.881644 kernel: BTRFS info (device vda6): auto enabling async discard Apr 25 00:06:01.882290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 25 00:06:01.906026 ignition[966]: INFO : Ignition 2.19.0 Apr 25 00:06:01.906026 ignition[966]: INFO : Stage: files Apr 25 00:06:01.906026 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 25 00:06:01.906026 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 25 00:06:01.906026 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Apr 25 00:06:01.913220 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 25 00:06:01.913220 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 25 00:06:01.917993 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 25 00:06:01.917993 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 25 00:06:01.917993 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 25 00:06:01.917993 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 25 00:06:01.917993 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 25 00:06:01.914296 unknown[966]: wrote ssh authorized keys file for user: core Apr 25 00:06:01.977952 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 25 00:06:02.085246 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 25 00:06:02.085246 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 25 00:06:02.091110 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 25 
00:06:02.094157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 25 00:06:02.097283 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 25 00:06:02.100487 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 25 00:06:02.103249 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 25 00:06:02.105598 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 25 00:06:02.108111 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 25 00:06:02.110628 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 25 00:06:02.113117 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 25 00:06:02.115589 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 25 00:06:02.119906 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 25 00:06:02.123517 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 25 00:06:02.123517 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 25 00:06:02.275088 systemd-networkd[792]: eth0: Gained IPv6LL Apr 25 00:06:02.393958 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 25 00:06:03.173225 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 25 00:06:03.173225 ignition[966]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 25 00:06:03.178321 ignition[966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 25 00:06:03.181292 ignition[966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 25 00:06:03.183859 ignition[966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 25 00:06:03.183859 ignition[966]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 25 00:06:03.187245 ignition[966]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 25 00:06:03.190007 ignition[966]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 25 00:06:03.190007 ignition[966]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 25 00:06:03.190007 ignition[966]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 25 00:06:03.220391 ignition[966]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 25 00:06:03.223909 ignition[966]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 25 00:06:03.226203 
ignition[966]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 25 00:06:03.226203 ignition[966]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 25 00:06:03.230129 ignition[966]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 25 00:06:03.230129 ignition[966]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 25 00:06:03.230129 ignition[966]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 25 00:06:03.230129 ignition[966]: INFO : files: files passed Apr 25 00:06:03.230129 ignition[966]: INFO : Ignition finished successfully Apr 25 00:06:03.239340 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 25 00:06:03.254797 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 25 00:06:03.258320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 25 00:06:03.259144 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 25 00:06:03.259217 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 25 00:06:03.272890 initrd-setup-root-after-ignition[995]: grep: /sysroot/oem/oem-release: No such file or directory Apr 25 00:06:03.276770 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 25 00:06:03.276770 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 25 00:06:03.280996 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 25 00:06:03.284091 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 25 00:06:03.285095 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 25 00:06:03.300780 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 25 00:06:03.321512 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 25 00:06:03.321640 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 25 00:06:03.324046 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 25 00:06:03.327989 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 25 00:06:03.328582 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 25 00:06:03.329281 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 25 00:06:03.346887 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 25 00:06:03.348885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 25 00:06:03.362014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 25 00:06:03.363083 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 25 00:06:03.366298 systemd[1]: Stopped target timers.target - Timer Units. Apr 25 00:06:03.369511 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 25 00:06:03.369656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 25 00:06:03.374598 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 25 00:06:03.377504 systemd[1]: Stopped target basic.target - Basic System. Apr 25 00:06:03.378273 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 25 00:06:03.381529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 25 00:06:03.385590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Apr 25 00:06:03.388908 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 25 00:06:03.392356 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 25 00:06:03.395571 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 25 00:06:03.398856 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 25 00:06:03.402152 systemd[1]: Stopped target swap.target - Swaps. Apr 25 00:06:03.404481 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 25 00:06:03.404652 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 25 00:06:03.408087 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 25 00:06:03.411029 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 25 00:06:03.413862 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 25 00:06:03.416691 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 25 00:06:03.417342 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 25 00:06:03.417432 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 25 00:06:03.423217 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 25 00:06:03.423351 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 25 00:06:03.425181 systemd[1]: Stopped target paths.target - Path Units. Apr 25 00:06:03.427820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 25 00:06:03.433748 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 25 00:06:03.434434 systemd[1]: Stopped target slices.target - Slice Units. Apr 25 00:06:03.438154 systemd[1]: Stopped target sockets.target - Socket Units. Apr 25 00:06:03.440236 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 25 00:06:03.440310 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 25 00:06:03.442579 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 25 00:06:03.442668 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 25 00:06:03.445147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 25 00:06:03.445261 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 25 00:06:03.447567 systemd[1]: ignition-files.service: Deactivated successfully. Apr 25 00:06:03.447674 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 25 00:06:03.464832 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 25 00:06:03.467922 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 25 00:06:03.468043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 25 00:06:03.472857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 25 00:06:03.475557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 25 00:06:03.475681 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 25 00:06:03.480393 ignition[1021]: INFO : Ignition 2.19.0 Apr 25 00:06:03.480393 ignition[1021]: INFO : Stage: umount Apr 25 00:06:03.483652 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 25 00:06:03.483652 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 25 00:06:03.483652 ignition[1021]: INFO : umount: umount passed Apr 25 00:06:03.483652 ignition[1021]: INFO : Ignition finished successfully Apr 25 00:06:03.482234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 25 00:06:03.485174 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 25 00:06:03.496301 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 25 00:06:03.498328 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 25 00:06:03.499684 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 25 00:06:03.503873 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 25 00:06:03.503998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 25 00:06:03.507223 systemd[1]: Stopped target network.target - Network. Apr 25 00:06:03.511260 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 25 00:06:03.511348 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 25 00:06:03.514017 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 25 00:06:03.514067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 25 00:06:03.514976 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 25 00:06:03.515020 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 25 00:06:03.520393 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 25 00:06:03.520431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 25 00:06:03.523121 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 25 00:06:03.524130 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 25 00:06:03.533323 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 25 00:06:03.533421 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 25 00:06:03.535972 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 25 00:06:03.536012 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 25 00:06:03.537708 systemd-networkd[792]: eth0: DHCPv6 lease lost Apr 25 00:06:03.540647 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 25 00:06:03.540792 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Apr 25 00:06:03.541690 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 25 00:06:03.541753 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 25 00:06:03.565987 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 25 00:06:03.569318 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 25 00:06:03.569389 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 25 00:06:03.573181 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 25 00:06:03.573223 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 25 00:06:03.576648 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 25 00:06:03.576691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 25 00:06:03.581744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 25 00:06:03.584792 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 25 00:06:03.584896 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 25 00:06:03.594519 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 25 00:06:03.594595 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 25 00:06:03.597717 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 25 00:06:03.597809 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 25 00:06:03.607818 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 25 00:06:03.607960 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 25 00:06:03.611592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 25 00:06:03.611657 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 25 00:06:03.613991 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 25 00:06:03.614016 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 25 00:06:03.616437 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 25 00:06:03.616469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 25 00:06:03.621649 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 25 00:06:03.621681 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 25 00:06:03.625153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 25 00:06:03.625188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 25 00:06:03.640850 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 25 00:06:03.642829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 25 00:06:03.644465 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 25 00:06:03.647800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 25 00:06:03.649574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 00:06:03.650707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 25 00:06:03.650810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 25 00:06:03.655757 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 25 00:06:03.657015 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 25 00:06:03.668399 systemd[1]: Switching root. Apr 25 00:06:03.699071 systemd-journald[194]: Journal stopped Apr 25 00:06:04.362668 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 25 00:06:04.362716 kernel: SELinux: policy capability network_peer_controls=1 Apr 25 00:06:04.362727 kernel: SELinux: policy capability open_perms=1 Apr 25 00:06:04.362751 kernel: SELinux: policy capability extended_socket_class=1 Apr 25 00:06:04.362763 kernel: SELinux: policy capability always_check_network=0 Apr 25 00:06:04.362770 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 25 00:06:04.362781 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 25 00:06:04.362789 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 25 00:06:04.362796 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 25 00:06:04.362804 kernel: audit: type=1403 audit(1777075563.808:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 25 00:06:04.362815 systemd[1]: Successfully loaded SELinux policy in 39.532ms. Apr 25 00:06:04.362832 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.677ms. Apr 25 00:06:04.362841 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 25 00:06:04.362849 systemd[1]: Detected virtualization kvm. Apr 25 00:06:04.362859 systemd[1]: Detected architecture x86-64. Apr 25 00:06:04.362868 systemd[1]: Detected first boot. Apr 25 00:06:04.362876 systemd[1]: Initializing machine ID from VM UUID. Apr 25 00:06:04.362885 zram_generator::config[1065]: No configuration found. Apr 25 00:06:04.362894 systemd[1]: Populated /etc with preset unit settings. Apr 25 00:06:04.362902 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 25 00:06:04.362913 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 25 00:06:04.362921 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 25 00:06:04.362929 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 25 00:06:04.362939 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 25 00:06:04.362947 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 25 00:06:04.362955 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 25 00:06:04.362962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 25 00:06:04.362970 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 25 00:06:04.362978 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 25 00:06:04.362985 systemd[1]: Created slice user.slice - User and Session Slice. Apr 25 00:06:04.362995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 25 00:06:04.363004 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 25 00:06:04.363013 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 25 00:06:04.363021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 25 00:06:04.363028 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 25 00:06:04.363037 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 25 00:06:04.363044 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 25 00:06:04.363052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 25 00:06:04.363060 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 25 00:06:04.363069 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 25 00:06:04.363078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 25 00:06:04.363086 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 25 00:06:04.363094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 25 00:06:04.363103 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 25 00:06:04.363110 systemd[1]: Reached target slices.target - Slice Units. Apr 25 00:06:04.363118 systemd[1]: Reached target swap.target - Swaps. Apr 25 00:06:04.363126 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 25 00:06:04.363134 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 25 00:06:04.363143 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 25 00:06:04.363151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 25 00:06:04.363159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 25 00:06:04.363166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 25 00:06:04.363174 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 25 00:06:04.363181 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 25 00:06:04.363189 systemd[1]: Mounting media.mount - External Media Directory... Apr 25 00:06:04.363197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 25 00:06:04.363205 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 25 00:06:04.363214 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 25 00:06:04.363222 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 25 00:06:04.363231 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 25 00:06:04.363238 systemd[1]: Reached target machines.target - Containers. Apr 25 00:06:04.363246 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 25 00:06:04.363253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 25 00:06:04.363261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 25 00:06:04.363269 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 25 00:06:04.363276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 25 00:06:04.363286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 25 00:06:04.363294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 25 00:06:04.363302 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 25 00:06:04.363309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 25 00:06:04.363317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 25 00:06:04.363324 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 25 00:06:04.363331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 25 00:06:04.363339 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 25 00:06:04.363348 systemd[1]: Stopped systemd-fsck-usr.service. Apr 25 00:06:04.363355 kernel: fuse: init (API version 7.39) Apr 25 00:06:04.363362 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 25 00:06:04.363371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 25 00:06:04.363379 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 25 00:06:04.363387 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 25 00:06:04.363394 kernel: ACPI: bus type drm_connector registered Apr 25 00:06:04.363402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 25 00:06:04.363421 systemd-journald[1142]: Collecting audit messages is disabled. Apr 25 00:06:04.363438 kernel: loop: module loaded Apr 25 00:06:04.363446 systemd-journald[1142]: Journal started Apr 25 00:06:04.363463 systemd-journald[1142]: Runtime Journal (/run/log/journal/4c51e0e289f3428999dca63656e93f74) is 6.0M, max 48.3M, 42.2M free. Apr 25 00:06:04.118925 systemd[1]: Queued start job for default target multi-user.target. Apr 25 00:06:04.136954 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 25 00:06:04.137267 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 25 00:06:04.369352 systemd[1]: verity-setup.service: Deactivated successfully. Apr 25 00:06:04.369385 systemd[1]: Stopped verity-setup.service. Apr 25 00:06:04.369395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 25 00:06:04.375898 systemd[1]: Started systemd-journald.service - Journal Service. Apr 25 00:06:04.376269 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 25 00:06:04.377790 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 25 00:06:04.379358 systemd[1]: Mounted media.mount - External Media Directory. Apr 25 00:06:04.380816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 25 00:06:04.382382 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 25 00:06:04.383958 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 25 00:06:04.385446 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 25 00:06:04.387215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 25 00:06:04.389060 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 25 00:06:04.389175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 25 00:06:04.390953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 25 00:06:04.391066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 25 00:06:04.392817 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 25 00:06:04.392924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 25 00:06:04.394524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 25 00:06:04.394804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 25 00:06:04.396812 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 25 00:06:04.396908 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 25 00:06:04.398528 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 25 00:06:04.398777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 25 00:06:04.400432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 25 00:06:04.402155 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 25 00:06:04.404021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 25 00:06:04.410680 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 25 00:06:04.415347 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 25 00:06:04.429752 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 25 00:06:04.432536 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 25 00:06:04.434077 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 25 00:06:04.434105 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 25 00:06:04.436171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 25 00:06:04.438589 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 25 00:06:04.440906 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 25 00:06:04.442399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 25 00:06:04.444815 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 25 00:06:04.447255 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 25 00:06:04.448919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 25 00:06:04.451722 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 25 00:06:04.453408 systemd-journald[1142]: Time spent on flushing to /var/log/journal/4c51e0e289f3428999dca63656e93f74 is 12.256ms for 991 entries. Apr 25 00:06:04.453408 systemd-journald[1142]: System Journal (/var/log/journal/4c51e0e289f3428999dca63656e93f74) is 8.0M, max 195.6M, 187.6M free. Apr 25 00:06:04.475679 systemd-journald[1142]: Received client request to flush runtime journal. 
Apr 25 00:06:04.453290 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 25 00:06:04.454007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 25 00:06:04.458369 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 25 00:06:04.468110 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 25 00:06:04.473405 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 25 00:06:04.476774 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 25 00:06:04.478639 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 25 00:06:04.481038 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 25 00:06:04.483137 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 25 00:06:04.485908 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 25 00:06:04.492646 kernel: loop0: detected capacity change from 0 to 142488 Apr 25 00:06:04.493207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 25 00:06:04.500243 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 25 00:06:04.506794 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 25 00:06:04.508861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 25 00:06:04.622642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 25 00:06:04.631946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 25 00:06:04.632848 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 25 00:06:04.636059 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 25 00:06:04.644804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 25 00:06:04.649640 kernel: loop1: detected capacity change from 0 to 140768 Apr 25 00:06:04.663009 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Apr 25 00:06:04.663025 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Apr 25 00:06:04.666290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 25 00:06:04.692643 kernel: loop2: detected capacity change from 0 to 217752 Apr 25 00:06:04.725088 kernel: loop3: detected capacity change from 0 to 142488 Apr 25 00:06:04.799695 kernel: loop4: detected capacity change from 0 to 140768 Apr 25 00:06:04.813126 kernel: loop5: detected capacity change from 0 to 217752 Apr 25 00:06:04.821756 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 25 00:06:04.822127 (sd-merge)[1208]: Merged extensions into '/usr'. Apr 25 00:06:04.825977 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Apr 25 00:06:04.826004 systemd[1]: Reloading... Apr 25 00:06:04.863820 zram_generator::config[1231]: No configuration found. Apr 25 00:06:05.011034 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 25 00:06:05.020045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 00:06:05.050447 systemd[1]: Reloading finished in 224 ms. Apr 25 00:06:05.078467 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 25 00:06:05.080478 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Apr 25 00:06:05.101268 systemd[1]: Starting ensure-sysext.service...
Apr 25 00:06:05.103441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 00:06:05.107791 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Apr 25 00:06:05.107811 systemd[1]: Reloading...
Apr 25 00:06:05.130858 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 25 00:06:05.131114 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 25 00:06:05.131729 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 25 00:06:05.131911 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Apr 25 00:06:05.131952 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Apr 25 00:06:05.136511 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 00:06:05.136520 systemd-tmpfiles[1272]: Skipping /boot
Apr 25 00:06:05.179377 kernel: hrtimer: interrupt took 3381434 ns
Apr 25 00:06:05.304982 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 00:06:05.305013 systemd-tmpfiles[1272]: Skipping /boot
Apr 25 00:06:05.338279 zram_generator::config[1296]: No configuration found.
Apr 25 00:06:05.402754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:06:05.430998 systemd[1]: Reloading finished in 322 ms.
Apr 25 00:06:05.443912 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 25 00:06:05.458040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 00:06:05.465273 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 25 00:06:05.467939 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 25 00:06:05.469843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 25 00:06:05.474524 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 00:06:05.478353 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 00:06:05.485879 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 25 00:06:05.490340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:06:05.490540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:06:05.492697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:06:05.505333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:06:05.509283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:06:05.509545 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Apr 25 00:06:05.510897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:06:05.512884 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 25 00:06:05.514297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:06:05.514404 augenrules[1361]: No rules
Apr 25 00:06:05.515229 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 25 00:06:05.518794 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 25 00:06:05.521241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:06:05.521366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:06:05.523459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:06:05.523722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:06:05.526130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:06:05.526231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:06:05.533905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 25 00:06:05.539806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 00:06:05.542733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:06:05.542881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:06:05.551872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:06:05.554305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:06:05.561685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:06:05.563171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:06:05.564544 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 00:06:05.567038 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 25 00:06:05.568509 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 25 00:06:05.568580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:06:05.569317 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 25 00:06:05.571930 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 25 00:06:05.574464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:06:05.574582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:06:05.581992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:06:05.582138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:06:05.584467 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:06:05.584598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:06:05.597089 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 00:06:05.598766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 00:06:05.600457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:06:05.600573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:06:05.607633 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 25 00:06:05.608243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:06:05.614131 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 00:06:05.616474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:06:05.621094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:06:05.622802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:06:05.622896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 25 00:06:05.622949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:06:05.623527 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 25 00:06:05.626403 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 25 00:06:05.627647 kernel: ACPI: button: Power Button [PWRF]
Apr 25 00:06:05.628025 systemd-resolved[1341]: Positive Trust Anchors:
Apr 25 00:06:05.628034 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 00:06:05.628058 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 00:06:05.628556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:06:05.629158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:06:05.631192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 00:06:05.633368 systemd[1]: Finished ensure-sysext.service.
Apr 25 00:06:05.636168 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Apr 25 00:06:05.637891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1394)
Apr 25 00:06:05.641840 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 25 00:06:05.646038 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 00:06:05.647889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:06:05.648015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:06:05.652511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:06:05.654701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:06:05.656536 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 00:06:05.656789 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 00:06:05.663486 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 25 00:06:05.663736 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 25 00:06:05.663885 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 25 00:06:05.663977 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 25 00:06:05.681578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 00:06:05.684649 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 25 00:06:05.682061 systemd-networkd[1400]: lo: Link UP
Apr 25 00:06:05.682067 systemd-networkd[1400]: lo: Gained carrier
Apr 25 00:06:05.683132 systemd-networkd[1400]: Enumeration completed
Apr 25 00:06:05.683588 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:06:05.683590 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 00:06:05.684755 systemd-networkd[1400]: eth0: Link UP
Apr 25 00:06:05.684758 systemd-networkd[1400]: eth0: Gained carrier
Apr 25 00:06:05.684768 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:06:05.685317 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 00:06:05.685392 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 00:06:05.687122 systemd[1]: Reached target network.target - Network.
Apr 25 00:06:05.696336 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 25 00:06:05.698162 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 25 00:06:05.785003 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 25 00:06:06.363851 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 25 00:06:06.364200 systemd-timesyncd[1416]: Initial clock synchronization to Sat 2026-04-25 00:06:06.363734 UTC.
Apr 25 00:06:06.364469 systemd-resolved[1341]: Clock change detected. Flushing caches.
Apr 25 00:06:06.366789 systemd[1]: Reached target time-set.target - System Time Set.
Apr 25 00:06:06.377176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 25 00:06:06.387629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 25 00:06:06.400931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:06:06.403590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 25 00:06:06.408129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 00:06:06.408777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:06:06.437446 kernel: mousedev: PS/2 mouse device common for all mice
Apr 25 00:06:06.440593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:06:06.505292 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 25 00:06:06.516995 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 25 00:06:06.524817 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 00:06:06.533591 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:06:06.568983 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 25 00:06:06.571475 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 00:06:06.573067 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 00:06:06.574670 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 25 00:06:06.576383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 25 00:06:06.578322 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 25 00:06:06.581501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 25 00:06:06.583594 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 25 00:06:06.585352 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 25 00:06:06.585558 systemd[1]: Reached target paths.target - Path Units.
Apr 25 00:06:06.586857 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 00:06:06.589102 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 25 00:06:06.592016 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 25 00:06:06.609267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 25 00:06:06.611749 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 25 00:06:06.613732 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 25 00:06:06.615311 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 00:06:06.616690 systemd[1]: Reached target basic.target - Basic System.
Apr 25 00:06:06.618129 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 25 00:06:06.618842 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 25 00:06:06.620610 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 25 00:06:06.622851 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 00:06:06.623147 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 25 00:06:06.627494 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 25 00:06:06.629026 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 25 00:06:06.630459 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 25 00:06:06.632656 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 25 00:06:06.634345 jq[1450]: false
Apr 25 00:06:06.636216 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 25 00:06:06.638332 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 25 00:06:06.642564 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 25 00:06:06.643148 extend-filesystems[1451]: Found loop3
Apr 25 00:06:06.643148 extend-filesystems[1451]: Found loop4
Apr 25 00:06:06.643148 extend-filesystems[1451]: Found loop5
Apr 25 00:06:06.643148 extend-filesystems[1451]: Found sr0
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda1
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda2
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda3
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found usr
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda4
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda6
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda7
Apr 25 00:06:06.644077 extend-filesystems[1451]: Found vda9
Apr 25 00:06:06.644077 extend-filesystems[1451]: Checking size of /dev/vda9
Apr 25 00:06:06.645012 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 25 00:06:06.650520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 25 00:06:06.650809 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 25 00:06:06.652649 systemd[1]: Starting update-engine.service - Update Engine...
Apr 25 00:06:06.659389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 25 00:06:06.664342 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 25 00:06:06.667176 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 25 00:06:06.667290 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 25 00:06:06.668066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 25 00:06:06.668223 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 25 00:06:06.672421 jq[1463]: true
Apr 25 00:06:06.672647 extend-filesystems[1451]: Resized partition /dev/vda9
Apr 25 00:06:06.674766 systemd[1]: motdgen.service: Deactivated successfully.
Apr 25 00:06:06.674909 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 25 00:06:06.677774 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024)
Apr 25 00:06:06.685710 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 25 00:06:06.684677 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 25 00:06:06.684238 dbus-daemon[1449]: [system] SELinux support is enabled
Apr 25 00:06:06.685901 update_engine[1461]: I20260425 00:06:06.681983 1461 main.cc:92] Flatcar Update Engine starting
Apr 25 00:06:06.691490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1375)
Apr 25 00:06:06.691987 jq[1476]: true
Apr 25 00:06:06.705967 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 25 00:06:06.712735 update_engine[1461]: I20260425 00:06:06.712682 1461 update_check_scheduler.cc:74] Next update check in 9m19s
Apr 25 00:06:06.715452 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 25 00:06:06.717719 tar[1472]: linux-amd64/LICENSE
Apr 25 00:06:06.723086 systemd[1]: Started update-engine.service - Update Engine.
Apr 25 00:06:06.724972 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 25 00:06:06.724989 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 25 00:06:06.726909 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 25 00:06:06.726923 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 25 00:06:06.738779 tar[1472]: linux-amd64/helm
Apr 25 00:06:06.739593 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 25 00:06:06.741162 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 25 00:06:06.748114 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 25 00:06:06.748114 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 25 00:06:06.748114 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 25 00:06:06.743455 systemd-logind[1458]: New seat seat0.
Apr 25 00:06:06.757990 bash[1502]: Updated "/home/core/.ssh/authorized_keys"
Apr 25 00:06:06.758048 extend-filesystems[1451]: Resized filesystem in /dev/vda9
Apr 25 00:06:06.745689 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 25 00:06:06.748266 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 25 00:06:06.748496 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 25 00:06:06.754820 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 25 00:06:06.759920 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 25 00:06:06.762047 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 25 00:06:07.042917 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 25 00:06:07.296139 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 25 00:06:07.490623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 25 00:06:07.501569 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 25 00:06:07.506777 systemd[1]: issuegen.service: Deactivated successfully.
Apr 25 00:06:07.506998 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 25 00:06:07.509914 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 25 00:06:07.538376 containerd[1482]: time="2026-04-25T00:06:07.537613407Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 25 00:06:07.545237 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 25 00:06:07.551690 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 25 00:06:07.557566 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 25 00:06:07.559349 systemd[1]: Reached target getty.target - Login Prompts.
Apr 25 00:06:07.564088 containerd[1482]: time="2026-04-25T00:06:07.564000735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.565850 containerd[1482]: time="2026-04-25T00:06:07.565820649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:06:07.565850 containerd[1482]: time="2026-04-25T00:06:07.565848929Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 25 00:06:07.565926 containerd[1482]: time="2026-04-25T00:06:07.565861777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 25 00:06:07.566035 containerd[1482]: time="2026-04-25T00:06:07.566018969Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 25 00:06:07.566065 containerd[1482]: time="2026-04-25T00:06:07.566052528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566185 containerd[1482]: time="2026-04-25T00:06:07.566168331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566214 containerd[1482]: time="2026-04-25T00:06:07.566185388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566357 containerd[1482]: time="2026-04-25T00:06:07.566339164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566374 containerd[1482]: time="2026-04-25T00:06:07.566357652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566374 containerd[1482]: time="2026-04-25T00:06:07.566368663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566435 containerd[1482]: time="2026-04-25T00:06:07.566375554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566502 containerd[1482]: time="2026-04-25T00:06:07.566486688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566789 containerd[1482]: time="2026-04-25T00:06:07.566759616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566877 containerd[1482]: time="2026-04-25T00:06:07.566858714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:06:07.566893 containerd[1482]: time="2026-04-25T00:06:07.566876301Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 25 00:06:07.566971 containerd[1482]: time="2026-04-25T00:06:07.566959668Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 25 00:06:07.567027 containerd[1482]: time="2026-04-25T00:06:07.567012652Z" level=info msg="metadata content store policy set" policy=shared
Apr 25 00:06:07.571864 containerd[1482]: time="2026-04-25T00:06:07.571833415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 25 00:06:07.571925 containerd[1482]: time="2026-04-25T00:06:07.571900011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 25 00:06:07.571925 containerd[1482]: time="2026-04-25T00:06:07.571914487Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 25 00:06:07.571958 containerd[1482]: time="2026-04-25T00:06:07.571928051Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 25 00:06:07.571958 containerd[1482]: time="2026-04-25T00:06:07.571948520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 25 00:06:07.572076 containerd[1482]: time="2026-04-25T00:06:07.572044599Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 25 00:06:07.572460 containerd[1482]: time="2026-04-25T00:06:07.572441292Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 25 00:06:07.572630 containerd[1482]: time="2026-04-25T00:06:07.572603525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 25 00:06:07.572630 containerd[1482]: time="2026-04-25T00:06:07.572622172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 25 00:06:07.572681 containerd[1482]: time="2026-04-25T00:06:07.572634105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 25 00:06:07.572681 containerd[1482]: time="2026-04-25T00:06:07.572647008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572681 containerd[1482]: time="2026-04-25T00:06:07.572658747Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572717 containerd[1482]: time="2026-04-25T00:06:07.572683322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572717 containerd[1482]: time="2026-04-25T00:06:07.572696544Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572717 containerd[1482]: time="2026-04-25T00:06:07.572710268Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572767 containerd[1482]: time="2026-04-25T00:06:07.572722234Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572767 containerd[1482]: time="2026-04-25T00:06:07.572742863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572767 containerd[1482]: time="2026-04-25T00:06:07.572754463Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 25 00:06:07.572802 containerd[1482]: time="2026-04-25T00:06:07.572781901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572815 containerd[1482]: time="2026-04-25T00:06:07.572803342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572840 containerd[1482]: time="2026-04-25T00:06:07.572816393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572840 containerd[1482]: time="2026-04-25T00:06:07.572829212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572878 containerd[1482]: time="2026-04-25T00:06:07.572869114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572892 containerd[1482]: time="2026-04-25T00:06:07.572883742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572915 containerd[1482]: time="2026-04-25T00:06:07.572896188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572915 containerd[1482]: time="2026-04-25T00:06:07.572907919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572941 containerd[1482]: time="2026-04-25T00:06:07.572919469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572941 containerd[1482]: time="2026-04-25T00:06:07.572932551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572966 containerd[1482]: time="2026-04-25T00:06:07.572943620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572979 containerd[1482]: time="2026-04-25T00:06:07.572964210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.572979 containerd[1482]: time="2026-04-25T00:06:07.572975981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.573014 containerd[1482]: time="2026-04-25T00:06:07.572989703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 25 00:06:07.573064 containerd[1482]: time="2026-04-25T00:06:07.573040976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.573081 containerd[1482]: time="2026-04-25T00:06:07.573063647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.573081 containerd[1482]: time="2026-04-25T00:06:07.573073865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 25 00:06:07.573162 containerd[1482]: time="2026-04-25T00:06:07.573145922Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 25 00:06:07.573207 containerd[1482]: time="2026-04-25T00:06:07.573172522Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 25 00:06:07.573207 containerd[1482]: time="2026-04-25T00:06:07.573191563Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 25 00:06:07.573249 containerd[1482]: time="2026-04-25T00:06:07.573203438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 25 00:06:07.573249 containerd[1482]: time="2026-04-25T00:06:07.573212018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 25 00:06:07.573249 containerd[1482]: time="2026-04-25T00:06:07.573239330Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 25 00:06:07.573308 containerd[1482]: time="2026-04-25T00:06:07.573261416Z" level=info msg="NRI interface is disabled by configuration."
Apr 25 00:06:07.573308 containerd[1482]: time="2026-04-25T00:06:07.573273160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Apr 25 00:06:07.573901 containerd[1482]: time="2026-04-25T00:06:07.573861068Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 25 00:06:07.574028 containerd[1482]: time="2026-04-25T00:06:07.573905949Z" level=info msg="Connect containerd service" Apr 25 00:06:07.574028 containerd[1482]: time="2026-04-25T00:06:07.573957356Z" level=info msg="using legacy CRI server" Apr 25 00:06:07.574028 containerd[1482]: time="2026-04-25T00:06:07.573963657Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 25 00:06:07.574227 containerd[1482]: time="2026-04-25T00:06:07.574204195Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 25 00:06:07.574904 containerd[1482]: time="2026-04-25T00:06:07.574877253Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 25 00:06:07.575109 containerd[1482]: time="2026-04-25T00:06:07.575056688Z" level=info msg="Start subscribing containerd event" Apr 25 00:06:07.575260 containerd[1482]: time="2026-04-25T00:06:07.575192261Z" level=info msg="Start recovering state" Apr 25 00:06:07.575334 containerd[1482]: time="2026-04-25T00:06:07.575318557Z" level=info msg="Start event monitor" Apr 25 00:06:07.575351 containerd[1482]: time="2026-04-25T00:06:07.575334594Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 25 00:06:07.575351 containerd[1482]: time="2026-04-25T00:06:07.575346938Z" level=info msg="Start snapshots syncer" Apr 25 00:06:07.575424 containerd[1482]: time="2026-04-25T00:06:07.575368095Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 25 00:06:07.575424 containerd[1482]: time="2026-04-25T00:06:07.575369843Z" level=info msg="Start cni network conf syncer for default" Apr 25 00:06:07.575424 containerd[1482]: time="2026-04-25T00:06:07.575418411Z" level=info msg="Start streaming server" Apr 25 00:06:07.576143 containerd[1482]: time="2026-04-25T00:06:07.575509863Z" level=info msg="containerd successfully booted in 0.039369s" Apr 25 00:06:07.575562 systemd[1]: Started containerd.service - containerd container runtime. Apr 25 00:06:07.698149 systemd-networkd[1400]: eth0: Gained IPv6LL Apr 25 00:06:07.702784 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 25 00:06:07.705058 systemd[1]: Reached target network-online.target - Network is Online. Apr 25 00:06:07.714683 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 25 00:06:07.717863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:06:07.724505 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 25 00:06:07.737628 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 25 00:06:07.737828 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 25 00:06:07.740117 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 25 00:06:07.747293 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 25 00:06:07.823429 tar[1472]: linux-amd64/README.md Apr 25 00:06:07.833740 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
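The CRI plugin's error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") clears once a network plugin drops a config into that directory. A minimal sketch of such a conflist, written to a temp directory rather than the real path; the network name, bridge name, and subnet below are illustrative values, not taken from this host:

```shell
# Illustrative only: on a real node a CNI plugin (flannel, calico, ...)
# writes a file like this into /etc/cni/net.d/.
netd=$(mktemp -d)
cat > "$netd/10-demo.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    }
  ]
}
EOF
```

The "Start cni network conf syncer for default" record above is the watcher that picks up such files, so containerd does not need a restart after the config appears.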
Apr 25 00:06:09.400495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:06:09.402871 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 25 00:06:09.410773 systemd[1]: Startup finished in 849ms (kernel) + 5.121s (initrd) + 5.062s (userspace) = 11.033s. Apr 25 00:06:09.412872 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 00:06:10.020349 kubelet[1561]: E0425 00:06:10.020175 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 00:06:10.022834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 00:06:10.022958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 00:06:10.023249 systemd[1]: kubelet.service: Consumed 2.120s CPU time. Apr 25 00:06:12.634560 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 25 00:06:12.635540 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:58040.service - OpenSSH per-connection server daemon (10.0.0.1:58040). Apr 25 00:06:12.693156 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 58040 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:12.695204 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:12.704593 systemd-logind[1458]: New session 1 of user core. Apr 25 00:06:12.705884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 25 00:06:12.720519 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
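The kubelet exit above is the expected state on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`. A hedged sketch of the kind of file that satisfies the check, written under a temp dir here; the fields shown are from the v1beta1 KubeletConfiguration API, and `cgroupDriver: systemd` matches the `SystemdCgroup:true` runc option in the containerd config logged earlier:

```shell
# Demo under a temp dir; on a real node kubeadm writes
# /var/lib/kubelet/config.yaml, which is exactly the file missing above.
kdir=$(mktemp -d)
cat > "$kdir/config.yaml" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup:true in containerd above
staticPodPath: /etc/kubernetes/manifests
EOF
```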
Apr 25 00:06:12.732150 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 25 00:06:12.734147 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 25 00:06:12.742453 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 25 00:06:12.813893 systemd[1579]: Queued start job for default target default.target. Apr 25 00:06:12.822375 systemd[1579]: Created slice app.slice - User Application Slice. Apr 25 00:06:12.822473 systemd[1579]: Reached target paths.target - Paths. Apr 25 00:06:12.822491 systemd[1579]: Reached target timers.target - Timers. Apr 25 00:06:12.823998 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 25 00:06:12.832966 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 25 00:06:12.833036 systemd[1579]: Reached target sockets.target - Sockets. Apr 25 00:06:12.833050 systemd[1579]: Reached target basic.target - Basic System. Apr 25 00:06:12.833082 systemd[1579]: Reached target default.target - Main User Target. Apr 25 00:06:12.833107 systemd[1579]: Startup finished in 85ms. Apr 25 00:06:12.833353 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 25 00:06:12.834624 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 25 00:06:12.900382 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:58044.service - OpenSSH per-connection server daemon (10.0.0.1:58044). Apr 25 00:06:12.939627 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 58044 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:12.941598 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:12.949895 systemd-logind[1458]: New session 2 of user core. Apr 25 00:06:12.958561 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 25 00:06:13.013867 sshd[1590]: pam_unix(sshd:session): session closed for user core Apr 25 00:06:13.024306 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:58044.service: Deactivated successfully. Apr 25 00:06:13.025387 systemd[1]: session-2.scope: Deactivated successfully. Apr 25 00:06:13.026339 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Apr 25 00:06:13.027271 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:58050.service - OpenSSH per-connection server daemon (10.0.0.1:58050). Apr 25 00:06:13.027955 systemd-logind[1458]: Removed session 2. Apr 25 00:06:13.057746 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:13.058929 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:13.063778 systemd-logind[1458]: New session 3 of user core. Apr 25 00:06:13.079560 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 25 00:06:13.129614 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 25 00:06:13.142508 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:58050.service: Deactivated successfully. Apr 25 00:06:13.143600 systemd[1]: session-3.scope: Deactivated successfully. Apr 25 00:06:13.144920 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Apr 25 00:06:13.147122 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:58062.service - OpenSSH per-connection server daemon (10.0.0.1:58062). Apr 25 00:06:13.147792 systemd-logind[1458]: Removed session 3. Apr 25 00:06:13.178498 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 58062 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:13.180215 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:13.185188 systemd-logind[1458]: New session 4 of user core. Apr 25 00:06:13.195432 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 25 00:06:13.262114 sshd[1604]: pam_unix(sshd:session): session closed for user core Apr 25 00:06:13.280030 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:58062.service: Deactivated successfully. Apr 25 00:06:13.281319 systemd[1]: session-4.scope: Deactivated successfully. Apr 25 00:06:13.282463 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Apr 25 00:06:13.283557 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:58070.service - OpenSSH per-connection server daemon (10.0.0.1:58070). Apr 25 00:06:13.284473 systemd-logind[1458]: Removed session 4. Apr 25 00:06:13.314928 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 58070 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:13.316038 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:13.319850 systemd-logind[1458]: New session 5 of user core. Apr 25 00:06:13.335692 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 25 00:06:13.396858 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 25 00:06:13.397089 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:06:13.416504 sudo[1614]: pam_unix(sudo:session): session closed for user root Apr 25 00:06:13.418830 sshd[1611]: pam_unix(sshd:session): session closed for user core Apr 25 00:06:13.430719 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:58070.service: Deactivated successfully. Apr 25 00:06:13.432083 systemd[1]: session-5.scope: Deactivated successfully. Apr 25 00:06:13.433229 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Apr 25 00:06:13.434437 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076). Apr 25 00:06:13.435199 systemd-logind[1458]: Removed session 5. 
Apr 25 00:06:13.471054 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:13.472549 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:13.476724 systemd-logind[1458]: New session 6 of user core. Apr 25 00:06:13.485866 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 25 00:06:13.543606 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 25 00:06:13.543808 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:06:13.550658 sudo[1623]: pam_unix(sudo:session): session closed for user root Apr 25 00:06:13.557379 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 25 00:06:13.558124 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:06:13.581765 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 25 00:06:13.583119 auditctl[1626]: No rules Apr 25 00:06:13.583925 systemd[1]: audit-rules.service: Deactivated successfully. Apr 25 00:06:13.584100 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 25 00:06:13.585472 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 25 00:06:13.623465 augenrules[1644]: No rules Apr 25 00:06:13.625474 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 25 00:06:13.626865 sudo[1622]: pam_unix(sudo:session): session closed for user root Apr 25 00:06:13.630221 sshd[1619]: pam_unix(sshd:session): session closed for user core Apr 25 00:06:13.640024 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:58076.service: Deactivated successfully. Apr 25 00:06:13.642211 systemd[1]: session-6.scope: Deactivated successfully. 
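The sudo session above deletes two files under /etc/audit/rules.d/ and restarts audit-rules; on restart, augenrules merges whatever remains of /etc/audit/rules.d/*.rules into /etc/audit/audit.rules, which is why both `auditctl` and `augenrules` then report "No rules". A hypothetical example of the kind of rules file involved, written to a temp dir (the watch path and key are illustrative):

```shell
# Hypothetical rules fragment; real files live in /etc/audit/rules.d/ and
# are compiled by augenrules when audit-rules.service starts.
rdir=$(mktemp -d)
cat > "$rdir/80-selinux.rules" <<'EOF'
-D
-w /etc/selinux/ -p wa -k selinux-policy
EOF
```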
Apr 25 00:06:13.645900 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Apr 25 00:06:13.658077 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:58092.service - OpenSSH per-connection server daemon (10.0.0.1:58092). Apr 25 00:06:13.658869 systemd-logind[1458]: Removed session 6. Apr 25 00:06:13.688431 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 58092 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:06:13.689946 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:06:13.696219 systemd-logind[1458]: New session 7 of user core. Apr 25 00:06:13.705597 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 25 00:06:13.760013 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 25 00:06:13.760319 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:06:14.830195 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 25 00:06:14.830925 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 25 00:06:15.837040 dockerd[1675]: time="2026-04-25T00:06:15.836813951Z" level=info msg="Starting up" Apr 25 00:06:15.997859 dockerd[1675]: time="2026-04-25T00:06:15.997769059Z" level=info msg="Loading containers: start." Apr 25 00:06:16.115437 kernel: Initializing XFRM netlink socket Apr 25 00:06:16.198644 systemd-networkd[1400]: docker0: Link UP Apr 25 00:06:16.225656 dockerd[1675]: time="2026-04-25T00:06:16.225514980Z" level=info msg="Loading containers: done." Apr 25 00:06:16.264118 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2162334132-merged.mount: Deactivated successfully. 
Apr 25 00:06:16.265831 dockerd[1675]: time="2026-04-25T00:06:16.265184371Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 25 00:06:16.266306 dockerd[1675]: time="2026-04-25T00:06:16.266278747Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 25 00:06:16.266485 dockerd[1675]: time="2026-04-25T00:06:16.266456396Z" level=info msg="Daemon has completed initialization" Apr 25 00:06:16.320925 dockerd[1675]: time="2026-04-25T00:06:16.320710570Z" level=info msg="API listen on /run/docker.sock" Apr 25 00:06:16.321458 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 25 00:06:17.191515 containerd[1482]: time="2026-04-25T00:06:17.191429630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 25 00:06:17.698129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411281027.mount: Deactivated successfully. 
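dockerd logs structured key=value fields, so values like the storage driver in the warning above can be pulled straight out of a journal line. A small sketch (the line is abridged here, not the full record):

```shell
# Abridged copy of the dockerd warning; sed extracts the storage-driver field.
line='level=warning msg="Not using native diff for overlay2" storage-driver=overlay2'
driver=$(printf '%s\n' "$line" | sed -n 's/.*storage-driver=\([a-z0-9]*\).*/\1/p')
echo "$driver"   # → overlay2
```

On a live host, `docker info --format '{{.Driver}}'` should report the same value without any log parsing.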
Apr 25 00:06:18.852582 containerd[1482]: time="2026-04-25T00:06:18.852431628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:18.853705 containerd[1482]: time="2026-04-25T00:06:18.853414717Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 25 00:06:18.854969 containerd[1482]: time="2026-04-25T00:06:18.854901257Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:18.858594 containerd[1482]: time="2026-04-25T00:06:18.858558081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:18.859676 containerd[1482]: time="2026-04-25T00:06:18.859637228Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.668048739s" Apr 25 00:06:18.859716 containerd[1482]: time="2026-04-25T00:06:18.859688755Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 25 00:06:18.861480 containerd[1482]: time="2026-04-25T00:06:18.861446459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 25 00:06:19.957167 containerd[1482]: time="2026-04-25T00:06:19.956956872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:19.958043 containerd[1482]: time="2026-04-25T00:06:19.957204452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 25 00:06:19.961283 containerd[1482]: time="2026-04-25T00:06:19.961189067Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:19.965663 containerd[1482]: time="2026-04-25T00:06:19.965635666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:19.966761 containerd[1482]: time="2026-04-25T00:06:19.966725472Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.1052427s" Apr 25 00:06:19.966813 containerd[1482]: time="2026-04-25T00:06:19.966763204Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 25 00:06:19.967865 containerd[1482]: time="2026-04-25T00:06:19.967815913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 25 00:06:20.397013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 25 00:06:20.403836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:06:20.888531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
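The PullImage records above carry both the image size and the wall-clock pull time (for kube-apiserver, 27,576,022 bytes in 1.668048739s), so the effective pull throughput is one awk expression away; the numbers below are copied from that record:

```shell
# Size and duration copied from the kube-apiserver PullImage record above.
rate=$(awk 'BEGIN { printf "%.1f", 27576022 / 1.668048739 / 1048576 }')
echo "$rate MiB/s"
```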
Apr 25 00:06:20.897639 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 00:06:20.977360 kubelet[1896]: E0425 00:06:20.976885 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 00:06:20.979779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 00:06:20.979939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 00:06:21.048640 containerd[1482]: time="2026-04-25T00:06:21.048504678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:21.049247 containerd[1482]: time="2026-04-25T00:06:21.049209371Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 25 00:06:21.050241 containerd[1482]: time="2026-04-25T00:06:21.050191266Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:21.052732 containerd[1482]: time="2026-04-25T00:06:21.052693754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:21.053664 containerd[1482]: time="2026-04-25T00:06:21.053613333Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.085761884s" Apr 25 00:06:21.053698 containerd[1482]: time="2026-04-25T00:06:21.053661941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 25 00:06:21.054667 containerd[1482]: time="2026-04-25T00:06:21.054642056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 25 00:06:22.317361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532666696.mount: Deactivated successfully. Apr 25 00:06:22.731112 containerd[1482]: time="2026-04-25T00:06:22.730839382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:22.731604 containerd[1482]: time="2026-04-25T00:06:22.731567233Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 25 00:06:22.732444 containerd[1482]: time="2026-04-25T00:06:22.732371060Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:22.734260 containerd[1482]: time="2026-04-25T00:06:22.734234843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:22.734821 containerd[1482]: time="2026-04-25T00:06:22.734777035Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.680101531s" Apr 25 00:06:22.734875 containerd[1482]: time="2026-04-25T00:06:22.734822006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 25 00:06:22.736092 containerd[1482]: time="2026-04-25T00:06:22.736067730Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 25 00:06:23.109169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791267437.mount: Deactivated successfully. Apr 25 00:06:24.326352 containerd[1482]: time="2026-04-25T00:06:24.326150283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:24.326352 containerd[1482]: time="2026-04-25T00:06:24.326298839Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 25 00:06:24.328224 containerd[1482]: time="2026-04-25T00:06:24.328180212Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:24.332263 containerd[1482]: time="2026-04-25T00:06:24.332141290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:24.335460 containerd[1482]: time="2026-04-25T00:06:24.335354170Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.599250772s" Apr 25 00:06:24.335460 containerd[1482]: time="2026-04-25T00:06:24.335427438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 25 00:06:24.336564 containerd[1482]: time="2026-04-25T00:06:24.336534857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 25 00:06:24.817357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910038142.mount: Deactivated successfully. Apr 25 00:06:24.828080 containerd[1482]: time="2026-04-25T00:06:24.827983020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:24.828981 containerd[1482]: time="2026-04-25T00:06:24.828904408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 25 00:06:24.831705 containerd[1482]: time="2026-04-25T00:06:24.831592703Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:24.835793 containerd[1482]: time="2026-04-25T00:06:24.835605655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:24.839907 containerd[1482]: time="2026-04-25T00:06:24.839781722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
503.191115ms" Apr 25 00:06:24.839907 containerd[1482]: time="2026-04-25T00:06:24.839842444Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 25 00:06:24.841135 containerd[1482]: time="2026-04-25T00:06:24.841055242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 25 00:06:25.294922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819266606.mount: Deactivated successfully. Apr 25 00:06:26.742696 containerd[1482]: time="2026-04-25T00:06:26.742523533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:26.743479 containerd[1482]: time="2026-04-25T00:06:26.743439342Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 25 00:06:26.744710 containerd[1482]: time="2026-04-25T00:06:26.744675739Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:26.747682 containerd[1482]: time="2026-04-25T00:06:26.747634683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:26.748559 containerd[1482]: time="2026-04-25T00:06:26.748521098Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.907408146s" Apr 25 00:06:26.748559 containerd[1482]: time="2026-04-25T00:06:26.748555523Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 25 00:06:27.855034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:06:27.864689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:06:27.885729 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit session-7.scope)... Apr 25 00:06:27.885754 systemd[1]: Reloading... Apr 25 00:06:27.953484 zram_generator::config[2104]: No configuration found. Apr 25 00:06:28.218090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 00:06:28.263342 systemd[1]: Reloading finished in 377 ms. Apr 25 00:06:28.298770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:06:28.300742 systemd[1]: kubelet.service: Deactivated successfully. Apr 25 00:06:28.300919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:06:28.302069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:06:28.433073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:06:28.450202 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 25 00:06:28.566104 kubelet[2154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 25 00:06:28.696638 kubelet[2154]: I0425 00:06:28.696515 2154 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 25 00:06:28.696638 kubelet[2154]: I0425 00:06:28.696560 2154 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 25 00:06:28.696638 kubelet[2154]: I0425 00:06:28.696588 2154 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 25 00:06:28.696638 kubelet[2154]: I0425 00:06:28.696593 2154 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 25 00:06:28.697237 kubelet[2154]: I0425 00:06:28.697062 2154 server.go:951] "Client rotation is on, will bootstrap in background" Apr 25 00:06:28.728706 kubelet[2154]: E0425 00:06:28.728556 2154 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 25 00:06:28.729730 kubelet[2154]: I0425 00:06:28.729704 2154 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 25 00:06:28.735948 kubelet[2154]: E0425 00:06:28.735895 2154 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 25 00:06:28.736036 kubelet[2154]: I0425 00:06:28.735956 2154 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 25 00:06:28.740209 kubelet[2154]: I0425 00:06:28.740195 2154 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 25 00:06:28.741415 kubelet[2154]: I0425 00:06:28.741346 2154 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 25 00:06:28.741718 kubelet[2154]: I0425 00:06:28.741388 2154 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 25 00:06:28.741873 kubelet[2154]: I0425 00:06:28.741723 2154 topology_manager.go:143] "Creating topology manager with none policy" Apr 25 00:06:28.741873 
kubelet[2154]: I0425 00:06:28.741730 2154 container_manager_linux.go:308] "Creating device plugin manager" Apr 25 00:06:28.741873 kubelet[2154]: I0425 00:06:28.741826 2154 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 25 00:06:28.743938 kubelet[2154]: I0425 00:06:28.743894 2154 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 25 00:06:28.744188 kubelet[2154]: I0425 00:06:28.744174 2154 kubelet.go:482] "Attempting to sync node with API server" Apr 25 00:06:28.744211 kubelet[2154]: I0425 00:06:28.744199 2154 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 25 00:06:28.744256 kubelet[2154]: I0425 00:06:28.744251 2154 kubelet.go:394] "Adding apiserver pod source" Apr 25 00:06:28.744294 kubelet[2154]: I0425 00:06:28.744275 2154 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 25 00:06:28.747067 kubelet[2154]: I0425 00:06:28.747048 2154 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 25 00:06:28.749290 kubelet[2154]: I0425 00:06:28.749175 2154 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 25 00:06:28.749290 kubelet[2154]: I0425 00:06:28.749216 2154 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 25 00:06:28.749444 kubelet[2154]: W0425 00:06:28.749390 2154 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 25 00:06:28.756038 kubelet[2154]: I0425 00:06:28.756016 2154 server.go:1257] "Started kubelet" Apr 25 00:06:28.758882 kubelet[2154]: I0425 00:06:28.758797 2154 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 25 00:06:28.758952 kubelet[2154]: I0425 00:06:28.758880 2154 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 25 00:06:28.759063 kubelet[2154]: I0425 00:06:28.759047 2154 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 25 00:06:28.759321 kubelet[2154]: I0425 00:06:28.759303 2154 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 25 00:06:28.761438 kubelet[2154]: I0425 00:06:28.759919 2154 server.go:317] "Adding debug handlers to kubelet server" Apr 25 00:06:28.761438 kubelet[2154]: I0425 00:06:28.761376 2154 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 25 00:06:28.761985 kubelet[2154]: I0425 00:06:28.761924 2154 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 25 00:06:28.764110 kubelet[2154]: E0425 00:06:28.763498 2154 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 25 00:06:28.764110 kubelet[2154]: I0425 00:06:28.763557 2154 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 25 00:06:28.764110 kubelet[2154]: I0425 00:06:28.763728 2154 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 25 00:06:28.764110 kubelet[2154]: I0425 00:06:28.763797 2154 reconciler.go:29] "Reconciler: start to sync state" Apr 25 00:06:28.764110 kubelet[2154]: E0425 00:06:28.764072 2154 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.111:6443: connect: connection refused" interval="200ms" Apr 25 00:06:28.765520 kubelet[2154]: I0425 00:06:28.764859 2154 factory.go:223] Registration of the systemd container factory successfully Apr 25 00:06:28.765520 kubelet[2154]: I0425 00:06:28.764919 2154 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 25 00:06:28.765714 kubelet[2154]: I0425 00:06:28.765689 2154 factory.go:223] Registration of the containerd container factory successfully Apr 25 00:06:28.767377 kubelet[2154]: E0425 00:06:28.765947 2154 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a970d526d0ede4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-25 00:06:28.755942884 +0000 UTC m=+0.282121697,LastTimestamp:2026-04-25 00:06:28.755942884 +0000 UTC m=+0.282121697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 25 00:06:28.767561 kubelet[2154]: E0425 00:06:28.767381 2154 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 25 00:06:28.778936 kubelet[2154]: I0425 00:06:28.778895 2154 cpu_manager.go:225] "Starting" policy="none" Apr 25 00:06:28.778936 kubelet[2154]: I0425 00:06:28.778916 2154 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 25 00:06:28.778936 kubelet[2154]: I0425 00:06:28.778936 2154 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 25 00:06:28.780607 kubelet[2154]: I0425 00:06:28.780568 2154 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 25 00:06:28.781618 kubelet[2154]: I0425 00:06:28.781597 2154 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 25 00:06:28.781694 kubelet[2154]: I0425 00:06:28.781681 2154 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 25 00:06:28.781878 kubelet[2154]: I0425 00:06:28.781739 2154 kubelet.go:2501] "Starting kubelet main sync loop" Apr 25 00:06:28.781878 kubelet[2154]: E0425 00:06:28.781819 2154 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 25 00:06:28.782453 kubelet[2154]: I0425 00:06:28.782426 2154 policy_none.go:50] "Start" Apr 25 00:06:28.782514 kubelet[2154]: I0425 00:06:28.782475 2154 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 25 00:06:28.782514 kubelet[2154]: I0425 00:06:28.782498 2154 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 25 00:06:28.785342 kubelet[2154]: I0425 00:06:28.785307 2154 policy_none.go:44] "Start" Apr 25 00:06:28.793929 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 25 00:06:28.819534 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 25 00:06:28.822804 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 25 00:06:28.833380 kubelet[2154]: E0425 00:06:28.833329 2154 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 25 00:06:28.834059 kubelet[2154]: I0425 00:06:28.833991 2154 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 25 00:06:28.834059 kubelet[2154]: I0425 00:06:28.834031 2154 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 25 00:06:28.835977 kubelet[2154]: I0425 00:06:28.835505 2154 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 25 00:06:28.836678 kubelet[2154]: E0425 00:06:28.836643 2154 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 25 00:06:28.836910 kubelet[2154]: E0425 00:06:28.836752 2154 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 25 00:06:28.896291 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 25 00:06:28.906528 kubelet[2154]: E0425 00:06:28.906491 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 25 00:06:28.907507 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 25 00:06:28.916390 kubelet[2154]: E0425 00:06:28.916355 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 25 00:06:28.919136 systemd[1]: Created slice kubepods-burstable-pod19612a347cd73d0ede3a9bcf8e1d15ea.slice - libcontainer container kubepods-burstable-pod19612a347cd73d0ede3a9bcf8e1d15ea.slice. Apr 25 00:06:28.920579 kubelet[2154]: E0425 00:06:28.920544 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 25 00:06:28.939124 kubelet[2154]: I0425 00:06:28.939043 2154 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 25 00:06:28.939490 kubelet[2154]: E0425 00:06:28.939471 2154 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Apr 25 00:06:28.965714 kubelet[2154]: E0425 00:06:28.965608 2154 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" Apr 25 00:06:29.065998 kubelet[2154]: I0425 00:06:29.065802 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:06:29.065998 kubelet[2154]: I0425 00:06:29.065942 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:06:29.065998 kubelet[2154]: I0425 00:06:29.065988 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 25 00:06:29.065998 kubelet[2154]: I0425 00:06:29.066042 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19612a347cd73d0ede3a9bcf8e1d15ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19612a347cd73d0ede3a9bcf8e1d15ea\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:06:29.065998 kubelet[2154]: I0425 00:06:29.066057 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19612a347cd73d0ede3a9bcf8e1d15ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19612a347cd73d0ede3a9bcf8e1d15ea\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:06:29.066764 kubelet[2154]: I0425 00:06:29.066070 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:06:29.066764 kubelet[2154]: I0425 00:06:29.066081 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:06:29.066764 kubelet[2154]: I0425 00:06:29.066124 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:06:29.066764 kubelet[2154]: I0425 00:06:29.066135 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19612a347cd73d0ede3a9bcf8e1d15ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19612a347cd73d0ede3a9bcf8e1d15ea\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:06:29.147429 kubelet[2154]: I0425 00:06:29.147018 2154 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 25 00:06:29.147429 kubelet[2154]: E0425 00:06:29.147365 2154 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Apr 25 00:06:29.239375 kubelet[2154]: E0425 00:06:29.239181 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:29.243364 kubelet[2154]: E0425 00:06:29.243261 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:29.243695 containerd[1482]: time="2026-04-25T00:06:29.243221039Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 25 00:06:29.244018 containerd[1482]: time="2026-04-25T00:06:29.243957054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 25 00:06:29.244842 kubelet[2154]: E0425 00:06:29.244786 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:29.245268 containerd[1482]: time="2026-04-25T00:06:29.245220572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19612a347cd73d0ede3a9bcf8e1d15ea,Namespace:kube-system,Attempt:0,}" Apr 25 00:06:29.367429 kubelet[2154]: E0425 00:06:29.367275 2154 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" Apr 25 00:06:29.552030 kubelet[2154]: I0425 00:06:29.551805 2154 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 25 00:06:29.552298 kubelet[2154]: E0425 00:06:29.552214 2154 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Apr 25 00:06:29.661802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567601765.mount: Deactivated successfully. 
Apr 25 00:06:29.676134 containerd[1482]: time="2026-04-25T00:06:29.675993657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:06:29.676612 containerd[1482]: time="2026-04-25T00:06:29.676339735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 25 00:06:29.680248 containerd[1482]: time="2026-04-25T00:06:29.680158800Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:06:29.680970 containerd[1482]: time="2026-04-25T00:06:29.680929538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:06:29.681557 containerd[1482]: time="2026-04-25T00:06:29.681516983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 00:06:29.682097 containerd[1482]: time="2026-04-25T00:06:29.682022745Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:06:29.682299 containerd[1482]: time="2026-04-25T00:06:29.682265784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 00:06:29.683760 containerd[1482]: time="2026-04-25T00:06:29.683728537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:06:29.685304 
containerd[1482]: time="2026-04-25T00:06:29.685263432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 441.832483ms" Apr 25 00:06:29.685728 containerd[1482]: time="2026-04-25T00:06:29.685703132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 440.351294ms" Apr 25 00:06:29.686989 containerd[1482]: time="2026-04-25T00:06:29.686960826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.904569ms" Apr 25 00:06:30.092500 containerd[1482]: time="2026-04-25T00:06:30.092163877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:30.092500 containerd[1482]: time="2026-04-25T00:06:30.092207157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:30.092500 containerd[1482]: time="2026-04-25T00:06:30.092218596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:30.092500 containerd[1482]: time="2026-04-25T00:06:30.092302742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:30.096955 containerd[1482]: time="2026-04-25T00:06:30.096822490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:30.096955 containerd[1482]: time="2026-04-25T00:06:30.096885672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:30.096955 containerd[1482]: time="2026-04-25T00:06:30.096899138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:30.097096 containerd[1482]: time="2026-04-25T00:06:30.096965651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:30.102023 containerd[1482]: time="2026-04-25T00:06:30.100537106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:30.102023 containerd[1482]: time="2026-04-25T00:06:30.100600102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:30.102023 containerd[1482]: time="2026-04-25T00:06:30.100611699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:30.102023 containerd[1482]: time="2026-04-25T00:06:30.100788374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:30.125617 systemd[1]: Started cri-containerd-7f687044c66312a71c1efd17a1fec53b61b1c9a8c81bc420a6df4ca6cb64ac34.scope - libcontainer container 7f687044c66312a71c1efd17a1fec53b61b1c9a8c81bc420a6df4ca6cb64ac34. 
Apr 25 00:06:30.133209 systemd[1]: Started cri-containerd-fafb26309d3e63594f116e8472d14f0e10c6e7773aef52a6c7cd74056e648a51.scope - libcontainer container fafb26309d3e63594f116e8472d14f0e10c6e7773aef52a6c7cd74056e648a51.
Apr 25 00:06:30.218425 kubelet[2154]: E0425 00:06:30.217765 2154 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s"
Apr 25 00:06:30.228563 systemd[1]: Started cri-containerd-3c3c91c4b4ce1f7799a502e207211add0c6e2767c808ccce18b9458cb9d71e41.scope - libcontainer container 3c3c91c4b4ce1f7799a502e207211add0c6e2767c808ccce18b9458cb9d71e41.
Apr 25 00:06:30.269050 containerd[1482]: time="2026-04-25T00:06:30.268868107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f687044c66312a71c1efd17a1fec53b61b1c9a8c81bc420a6df4ca6cb64ac34\""
Apr 25 00:06:30.271593 containerd[1482]: time="2026-04-25T00:06:30.271266624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19612a347cd73d0ede3a9bcf8e1d15ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"fafb26309d3e63594f116e8472d14f0e10c6e7773aef52a6c7cd74056e648a51\""
Apr 25 00:06:30.271661 kubelet[2154]: E0425 00:06:30.271300 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:30.272642 kubelet[2154]: E0425 00:06:30.272185 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:30.278605 containerd[1482]: time="2026-04-25T00:06:30.278586086Z" level=info msg="CreateContainer within sandbox \"7f687044c66312a71c1efd17a1fec53b61b1c9a8c81bc420a6df4ca6cb64ac34\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 25 00:06:30.279273 containerd[1482]: time="2026-04-25T00:06:30.279258067Z" level=info msg="CreateContainer within sandbox \"fafb26309d3e63594f116e8472d14f0e10c6e7773aef52a6c7cd74056e648a51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 25 00:06:30.281858 containerd[1482]: time="2026-04-25T00:06:30.281804332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c3c91c4b4ce1f7799a502e207211add0c6e2767c808ccce18b9458cb9d71e41\""
Apr 25 00:06:30.282665 kubelet[2154]: E0425 00:06:30.282633 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:30.286409 containerd[1482]: time="2026-04-25T00:06:30.286367527Z" level=info msg="CreateContainer within sandbox \"3c3c91c4b4ce1f7799a502e207211add0c6e2767c808ccce18b9458cb9d71e41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 25 00:06:30.300935 containerd[1482]: time="2026-04-25T00:06:30.300817895Z" level=info msg="CreateContainer within sandbox \"7f687044c66312a71c1efd17a1fec53b61b1c9a8c81bc420a6df4ca6cb64ac34\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"516376f0e7a0f87cf0dc5816871727cb2b1132deada496ce56cbc9fc07e43a59\""
Apr 25 00:06:30.303067 containerd[1482]: time="2026-04-25T00:06:30.303035007Z" level=info msg="StartContainer for \"516376f0e7a0f87cf0dc5816871727cb2b1132deada496ce56cbc9fc07e43a59\""
Apr 25 00:06:30.305556 containerd[1482]: time="2026-04-25T00:06:30.305352537Z" level=info msg="CreateContainer within sandbox \"fafb26309d3e63594f116e8472d14f0e10c6e7773aef52a6c7cd74056e648a51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9067118720bbf4791167684d7745fd566a66235f3faa2ada280f8622d8a11e6\""
Apr 25 00:06:30.308244 containerd[1482]: time="2026-04-25T00:06:30.308148315Z" level=info msg="StartContainer for \"e9067118720bbf4791167684d7745fd566a66235f3faa2ada280f8622d8a11e6\""
Apr 25 00:06:30.312378 containerd[1482]: time="2026-04-25T00:06:30.312105969Z" level=info msg="CreateContainer within sandbox \"3c3c91c4b4ce1f7799a502e207211add0c6e2767c808ccce18b9458cb9d71e41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dd7448335e472711017ddc7cbd878a63792e96fbc1c4e176183619116179a888\""
Apr 25 00:06:30.313486 containerd[1482]: time="2026-04-25T00:06:30.313469669Z" level=info msg="StartContainer for \"dd7448335e472711017ddc7cbd878a63792e96fbc1c4e176183619116179a888\""
Apr 25 00:06:30.335592 systemd[1]: Started cri-containerd-e9067118720bbf4791167684d7745fd566a66235f3faa2ada280f8622d8a11e6.scope - libcontainer container e9067118720bbf4791167684d7745fd566a66235f3faa2ada280f8622d8a11e6.
Apr 25 00:06:30.338865 systemd[1]: Started cri-containerd-516376f0e7a0f87cf0dc5816871727cb2b1132deada496ce56cbc9fc07e43a59.scope - libcontainer container 516376f0e7a0f87cf0dc5816871727cb2b1132deada496ce56cbc9fc07e43a59.
Apr 25 00:06:30.339986 systemd[1]: Started cri-containerd-dd7448335e472711017ddc7cbd878a63792e96fbc1c4e176183619116179a888.scope - libcontainer container dd7448335e472711017ddc7cbd878a63792e96fbc1c4e176183619116179a888.
Apr 25 00:06:30.355795 kubelet[2154]: I0425 00:06:30.355725 2154 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 25 00:06:30.360388 kubelet[2154]: E0425 00:06:30.360271 2154 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost"
Apr 25 00:06:30.385375 containerd[1482]: time="2026-04-25T00:06:30.385331526Z" level=info msg="StartContainer for \"516376f0e7a0f87cf0dc5816871727cb2b1132deada496ce56cbc9fc07e43a59\" returns successfully"
Apr 25 00:06:30.392242 containerd[1482]: time="2026-04-25T00:06:30.391350652Z" level=info msg="StartContainer for \"e9067118720bbf4791167684d7745fd566a66235f3faa2ada280f8622d8a11e6\" returns successfully"
Apr 25 00:06:30.400612 containerd[1482]: time="2026-04-25T00:06:30.400557604Z" level=info msg="StartContainer for \"dd7448335e472711017ddc7cbd878a63792e96fbc1c4e176183619116179a888\" returns successfully"
Apr 25 00:06:31.111482 kubelet[2154]: E0425 00:06:31.111341 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 25 00:06:31.111482 kubelet[2154]: E0425 00:06:31.111532 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 25 00:06:31.111797 kubelet[2154]: E0425 00:06:31.111612 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:31.111797 kubelet[2154]: E0425 00:06:31.111697 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:31.114121 kubelet[2154]: E0425 00:06:31.113815 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 25 00:06:31.114427 kubelet[2154]: E0425 00:06:31.114346 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:32.118085 kubelet[2154]: I0425 00:06:32.118004 2154 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 25 00:06:32.120987 kubelet[2154]: E0425 00:06:32.120963 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 25 00:06:32.121457 kubelet[2154]: E0425 00:06:32.121438 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:32.122086 kubelet[2154]: E0425 00:06:32.122071 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 25 00:06:32.122476 kubelet[2154]: E0425 00:06:32.122200 2154 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 25 00:06:32.122476 kubelet[2154]: E0425 00:06:32.122323 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:32.122756 kubelet[2154]: E0425 00:06:32.122743 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:32.895739 kubelet[2154]: E0425 00:06:32.895627 2154 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 25 00:06:32.961619 kubelet[2154]: I0425 00:06:32.961536 2154 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 25 00:06:32.963952 kubelet[2154]: I0425 00:06:32.963917 2154 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:33.041464 kubelet[2154]: E0425 00:06:33.038557 2154 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:33.041464 kubelet[2154]: I0425 00:06:33.038661 2154 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:33.041464 kubelet[2154]: E0425 00:06:33.041321 2154 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:33.041464 kubelet[2154]: I0425 00:06:33.041340 2154 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:33.045945 kubelet[2154]: E0425 00:06:33.045873 2154 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:33.125457 kubelet[2154]: I0425 00:06:33.125353 2154 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:33.127335 kubelet[2154]: E0425 00:06:33.127310 2154 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:33.127586 kubelet[2154]: E0425 00:06:33.127571 2154 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:33.758022 kubelet[2154]: I0425 00:06:33.757829 2154 apiserver.go:52] "Watching apiserver"
Apr 25 00:06:33.764634 kubelet[2154]: I0425 00:06:33.764588 2154 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 25 00:06:34.910492 systemd[1]: Reloading requested from client PID 2443 ('systemctl') (unit session-7.scope)...
Apr 25 00:06:34.910514 systemd[1]: Reloading...
Apr 25 00:06:34.977537 zram_generator::config[2478]: No configuration found.
Apr 25 00:06:35.069105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:06:35.128083 systemd[1]: Reloading finished in 217 ms.
Apr 25 00:06:35.167162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 00:06:35.188128 systemd[1]: kubelet.service: Deactivated successfully.
Apr 25 00:06:35.188426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 00:06:35.188473 systemd[1]: kubelet.service: Consumed 1.905s CPU time, 126.9M memory peak, 0B memory swap peak.
Apr 25 00:06:35.201663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 00:06:35.314729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 00:06:35.318028 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 25 00:06:35.374031 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 25 00:06:35.380600 kubelet[2527]: I0425 00:06:35.380550 2527 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 25 00:06:35.380600 kubelet[2527]: I0425 00:06:35.380587 2527 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 25 00:06:35.380600 kubelet[2527]: I0425 00:06:35.380601 2527 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 25 00:06:35.380600 kubelet[2527]: I0425 00:06:35.380604 2527 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 25 00:06:35.380867 kubelet[2527]: I0425 00:06:35.380850 2527 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 25 00:06:35.381867 kubelet[2527]: I0425 00:06:35.381842 2527 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 25 00:06:35.386484 kubelet[2527]: I0425 00:06:35.386339 2527 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 25 00:06:35.389147 kubelet[2527]: E0425 00:06:35.389117 2527 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 25 00:06:35.389213 kubelet[2527]: I0425 00:06:35.389157 2527 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 25 00:06:35.395027 kubelet[2527]: I0425 00:06:35.395003 2527 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 25 00:06:35.395267 kubelet[2527]: I0425 00:06:35.395221 2527 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 25 00:06:35.395387 kubelet[2527]: I0425 00:06:35.395247 2527 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 25 00:06:35.395387 kubelet[2527]: I0425 00:06:35.395375 2527 topology_manager.go:143] "Creating topology manager with none policy"
Apr 25 00:06:35.395387 kubelet[2527]: I0425 00:06:35.395381 2527 container_manager_linux.go:308] "Creating device plugin manager"
Apr 25 00:06:35.395526 kubelet[2527]: I0425 00:06:35.395423 2527 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 25 00:06:35.395592 kubelet[2527]: I0425 00:06:35.395577 2527 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 25 00:06:35.395761 kubelet[2527]: I0425 00:06:35.395737 2527 kubelet.go:482] "Attempting to sync node with API server"
Apr 25 00:06:35.395783 kubelet[2527]: I0425 00:06:35.395774 2527 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 25 00:06:35.395821 kubelet[2527]: I0425 00:06:35.395809 2527 kubelet.go:394] "Adding apiserver pod source"
Apr 25 00:06:35.395841 kubelet[2527]: I0425 00:06:35.395824 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 25 00:06:35.399660 kubelet[2527]: I0425 00:06:35.398822 2527 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 25 00:06:35.399996 kubelet[2527]: I0425 00:06:35.399672 2527 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 25 00:06:35.399996 kubelet[2527]: I0425 00:06:35.399694 2527 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 25 00:06:35.402839 kubelet[2527]: I0425 00:06:35.402642 2527 server.go:1257] "Started kubelet"
Apr 25 00:06:35.403205 kubelet[2527]: I0425 00:06:35.403160 2527 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 25 00:06:35.403314 kubelet[2527]: I0425 00:06:35.403253 2527 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 25 00:06:35.403476 kubelet[2527]: I0425 00:06:35.403445 2527 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 25 00:06:35.403693 kubelet[2527]: I0425 00:06:35.403661 2527 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 25 00:06:35.404007 kubelet[2527]: I0425 00:06:35.403990 2527 server.go:317] "Adding debug handlers to kubelet server"
Apr 25 00:06:35.406281 kubelet[2527]: I0425 00:06:35.406256 2527 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 25 00:06:35.406535 kubelet[2527]: I0425 00:06:35.406524 2527 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 25 00:06:35.406883 kubelet[2527]: I0425 00:06:35.406851 2527 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 25 00:06:35.407012 kubelet[2527]: I0425 00:06:35.406981 2527 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 25 00:06:35.407265 kubelet[2527]: I0425 00:06:35.407240 2527 reconciler.go:29] "Reconciler: start to sync state"
Apr 25 00:06:35.437094 kubelet[2527]: E0425 00:06:35.436923 2527 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 25 00:06:35.438956 kubelet[2527]: I0425 00:06:35.438926 2527 factory.go:223] Registration of the systemd container factory successfully
Apr 25 00:06:35.439088 kubelet[2527]: I0425 00:06:35.439059 2527 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 25 00:06:35.446723 kubelet[2527]: I0425 00:06:35.446627 2527 factory.go:223] Registration of the containerd container factory successfully
Apr 25 00:06:35.452330 kubelet[2527]: I0425 00:06:35.452287 2527 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 25 00:06:35.453348 kubelet[2527]: I0425 00:06:35.453315 2527 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 25 00:06:35.453428 kubelet[2527]: I0425 00:06:35.453358 2527 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 25 00:06:35.453428 kubelet[2527]: I0425 00:06:35.453380 2527 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 25 00:06:35.453895 kubelet[2527]: E0425 00:06:35.453847 2527 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 25 00:06:35.483813 kubelet[2527]: I0425 00:06:35.483765 2527 cpu_manager.go:225] "Starting" policy="none"
Apr 25 00:06:35.483813 kubelet[2527]: I0425 00:06:35.483786 2527 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 25 00:06:35.483813 kubelet[2527]: I0425 00:06:35.483802 2527 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 25 00:06:35.483813 kubelet[2527]: I0425 00:06:35.483905 2527 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 25 00:06:35.484163 kubelet[2527]: I0425 00:06:35.483913 2527 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 25 00:06:35.484163 kubelet[2527]: I0425 00:06:35.483956 2527 policy_none.go:50] "Start"
Apr 25 00:06:35.484163 kubelet[2527]: I0425 00:06:35.484016 2527 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 25 00:06:35.484163 kubelet[2527]: I0425 00:06:35.484024 2527 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 25 00:06:35.484163 kubelet[2527]: I0425 00:06:35.484118 2527 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 25 00:06:35.484163 kubelet[2527]: I0425 00:06:35.484123 2527 policy_none.go:44] "Start"
Apr 25 00:06:35.493555 kubelet[2527]: E0425 00:06:35.493503 2527 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 25 00:06:35.493893 kubelet[2527]: I0425 00:06:35.493863 2527 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 25 00:06:35.493942 kubelet[2527]: I0425 00:06:35.493882 2527 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 25 00:06:35.495156 kubelet[2527]: I0425 00:06:35.494472 2527 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 25 00:06:35.495156 kubelet[2527]: E0425 00:06:35.494856 2527 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 25 00:06:35.556667 kubelet[2527]: I0425 00:06:35.556539 2527 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:35.556667 kubelet[2527]: I0425 00:06:35.556602 2527 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:35.556667 kubelet[2527]: I0425 00:06:35.556766 2527 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:35.604374 kubelet[2527]: I0425 00:06:35.604077 2527 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 25 00:06:35.608331 kubelet[2527]: I0425 00:06:35.608295 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19612a347cd73d0ede3a9bcf8e1d15ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19612a347cd73d0ede3a9bcf8e1d15ea\") " pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:35.608331 kubelet[2527]: I0425 00:06:35.608323 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:35.609290 kubelet[2527]: I0425 00:06:35.608338 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:35.609290 kubelet[2527]: I0425 00:06:35.608352 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:35.609290 kubelet[2527]: I0425 00:06:35.608431 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19612a347cd73d0ede3a9bcf8e1d15ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19612a347cd73d0ede3a9bcf8e1d15ea\") " pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:35.609290 kubelet[2527]: I0425 00:06:35.608463 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19612a347cd73d0ede3a9bcf8e1d15ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19612a347cd73d0ede3a9bcf8e1d15ea\") " pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:35.609290 kubelet[2527]: I0425 00:06:35.608478 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:35.609503 kubelet[2527]: I0425 00:06:35.608492 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 25 00:06:35.609503 kubelet[2527]: I0425 00:06:35.608504 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:35.614785 kubelet[2527]: I0425 00:06:35.614744 2527 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 25 00:06:35.615052 kubelet[2527]: I0425 00:06:35.615022 2527 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 25 00:06:35.877047 kubelet[2527]: E0425 00:06:35.876544 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:35.877047 kubelet[2527]: E0425 00:06:35.876661 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:35.877047 kubelet[2527]: E0425 00:06:35.876549 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:36.416697 kubelet[2527]: I0425 00:06:36.416590 2527 apiserver.go:52] "Watching apiserver"
Apr 25 00:06:36.477205 kubelet[2527]: I0425 00:06:36.477069 2527 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:36.478166 kubelet[2527]: E0425 00:06:36.477130 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:36.478722 kubelet[2527]: I0425 00:06:36.478685 2527 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:36.490141 kubelet[2527]: E0425 00:06:36.490057 2527 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 25 00:06:36.492440 kubelet[2527]: E0425 00:06:36.492290 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:36.494538 kubelet[2527]: E0425 00:06:36.494477 2527 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 25 00:06:36.495021 kubelet[2527]: E0425 00:06:36.494897 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:36.507870 kubelet[2527]: I0425 00:06:36.507756 2527 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 25 00:06:37.481036 kubelet[2527]: E0425 00:06:37.480829 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:37.481036 kubelet[2527]: E0425 00:06:37.480841 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:37.532757 kubelet[2527]: I0425 00:06:37.532658 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.532609306 podStartE2EDuration="2.532609306s" podCreationTimestamp="2026-04-25 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:06:37.532333405 +0000 UTC m=+2.209143642" watchObservedRunningTime="2026-04-25 00:06:37.532609306 +0000 UTC m=+2.209419531"
Apr 25 00:06:37.532757 kubelet[2527]: I0425 00:06:37.532769 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.532765398 podStartE2EDuration="2.532765398s" podCreationTimestamp="2026-04-25 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:06:37.525015056 +0000 UTC m=+2.201825285" watchObservedRunningTime="2026-04-25 00:06:37.532765398 +0000 UTC m=+2.209575623"
Apr 25 00:06:38.085683 kubelet[2527]: I0425 00:06:38.085556 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.085518737 podStartE2EDuration="3.085518737s" podCreationTimestamp="2026-04-25 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:06:37.539546596 +0000 UTC m=+2.216356828" watchObservedRunningTime="2026-04-25 00:06:38.085518737 +0000 UTC m=+2.762328973"
Apr 25 00:06:38.487380 kubelet[2527]: E0425 00:06:38.486494 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:39.488181 kubelet[2527]: E0425 00:06:39.488072 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:41.151719 kubelet[2527]: I0425 00:06:41.151615 2527 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 25 00:06:41.152389 containerd[1482]: time="2026-04-25T00:06:41.152267866Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 25 00:06:41.154338 kubelet[2527]: I0425 00:06:41.153574 2527 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 25 00:06:41.739378 kubelet[2527]: E0425 00:06:41.739095 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:06:42.007678 systemd[1]: Created slice kubepods-besteffort-podb32a598a_60df_4ac2_82b2_32dd983be155.slice - libcontainer container kubepods-besteffort-podb32a598a_60df_4ac2_82b2_32dd983be155.slice.
Apr 25 00:06:42.151384 kubelet[2527]: I0425 00:06:42.151194 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b32a598a-60df-4ac2-82b2-32dd983be155-kube-proxy\") pod \"kube-proxy-2g6kp\" (UID: \"b32a598a-60df-4ac2-82b2-32dd983be155\") " pod="kube-system/kube-proxy-2g6kp" Apr 25 00:06:42.151384 kubelet[2527]: I0425 00:06:42.151332 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b32a598a-60df-4ac2-82b2-32dd983be155-xtables-lock\") pod \"kube-proxy-2g6kp\" (UID: \"b32a598a-60df-4ac2-82b2-32dd983be155\") " pod="kube-system/kube-proxy-2g6kp" Apr 25 00:06:42.151384 kubelet[2527]: I0425 00:06:42.151444 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrdk\" (UniqueName: \"kubernetes.io/projected/b32a598a-60df-4ac2-82b2-32dd983be155-kube-api-access-4vrdk\") pod \"kube-proxy-2g6kp\" (UID: \"b32a598a-60df-4ac2-82b2-32dd983be155\") " pod="kube-system/kube-proxy-2g6kp" Apr 25 00:06:42.151384 kubelet[2527]: I0425 00:06:42.151460 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b32a598a-60df-4ac2-82b2-32dd983be155-lib-modules\") pod \"kube-proxy-2g6kp\" (UID: \"b32a598a-60df-4ac2-82b2-32dd983be155\") " pod="kube-system/kube-proxy-2g6kp" Apr 25 00:06:42.333134 kubelet[2527]: E0425 00:06:42.331924 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:42.336537 containerd[1482]: time="2026-04-25T00:06:42.336176726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2g6kp,Uid:b32a598a-60df-4ac2-82b2-32dd983be155,Namespace:kube-system,Attempt:0,}" Apr 
25 00:06:42.405958 systemd[1]: Created slice kubepods-besteffort-pod6c6a9ddd_2eb6_4fb5_8212_703faf12ab5e.slice - libcontainer container kubepods-besteffort-pod6c6a9ddd_2eb6_4fb5_8212_703faf12ab5e.slice. Apr 25 00:06:42.420781 containerd[1482]: time="2026-04-25T00:06:42.420482577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:42.420781 containerd[1482]: time="2026-04-25T00:06:42.420685304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:42.420781 containerd[1482]: time="2026-04-25T00:06:42.420748305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:42.420781 containerd[1482]: time="2026-04-25T00:06:42.420877277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:42.755269 kubelet[2527]: I0425 00:06:42.753921 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mq9\" (UniqueName: \"kubernetes.io/projected/6c6a9ddd-2eb6-4fb5-8212-703faf12ab5e-kube-api-access-62mq9\") pod \"tigera-operator-6cf4cccc57-ftk6d\" (UID: \"6c6a9ddd-2eb6-4fb5-8212-703faf12ab5e\") " pod="tigera-operator/tigera-operator-6cf4cccc57-ftk6d" Apr 25 00:06:42.755269 kubelet[2527]: I0425 00:06:42.754321 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c6a9ddd-2eb6-4fb5-8212-703faf12ab5e-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-ftk6d\" (UID: \"6c6a9ddd-2eb6-4fb5-8212-703faf12ab5e\") " pod="tigera-operator/tigera-operator-6cf4cccc57-ftk6d" Apr 25 00:06:42.776628 systemd[1]: Started 
cri-containerd-79c77a0b447604ae809e153b244e23c07c6110c19350c9e0ec077d0c2fac13b8.scope - libcontainer container 79c77a0b447604ae809e153b244e23c07c6110c19350c9e0ec077d0c2fac13b8. Apr 25 00:06:42.807220 containerd[1482]: time="2026-04-25T00:06:42.807119434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2g6kp,Uid:b32a598a-60df-4ac2-82b2-32dd983be155,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c77a0b447604ae809e153b244e23c07c6110c19350c9e0ec077d0c2fac13b8\"" Apr 25 00:06:42.808194 kubelet[2527]: E0425 00:06:42.808170 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:42.816962 containerd[1482]: time="2026-04-25T00:06:42.816828999Z" level=info msg="CreateContainer within sandbox \"79c77a0b447604ae809e153b244e23c07c6110c19350c9e0ec077d0c2fac13b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 25 00:06:42.833783 containerd[1482]: time="2026-04-25T00:06:42.833723963Z" level=info msg="CreateContainer within sandbox \"79c77a0b447604ae809e153b244e23c07c6110c19350c9e0ec077d0c2fac13b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4cd418d2ea8429981b72484798c6f07efa76322b4391f78a476c794f24a685ab\"" Apr 25 00:06:42.834474 containerd[1482]: time="2026-04-25T00:06:42.834452234Z" level=info msg="StartContainer for \"4cd418d2ea8429981b72484798c6f07efa76322b4391f78a476c794f24a685ab\"" Apr 25 00:06:42.935583 systemd[1]: Started cri-containerd-4cd418d2ea8429981b72484798c6f07efa76322b4391f78a476c794f24a685ab.scope - libcontainer container 4cd418d2ea8429981b72484798c6f07efa76322b4391f78a476c794f24a685ab. 
Apr 25 00:06:42.961736 containerd[1482]: time="2026-04-25T00:06:42.961660331Z" level=info msg="StartContainer for \"4cd418d2ea8429981b72484798c6f07efa76322b4391f78a476c794f24a685ab\" returns successfully" Apr 25 00:06:43.020052 containerd[1482]: time="2026-04-25T00:06:43.019757954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-ftk6d,Uid:6c6a9ddd-2eb6-4fb5-8212-703faf12ab5e,Namespace:tigera-operator,Attempt:0,}" Apr 25 00:06:43.073716 containerd[1482]: time="2026-04-25T00:06:43.073374548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:43.075017 containerd[1482]: time="2026-04-25T00:06:43.074970571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:43.075072 containerd[1482]: time="2026-04-25T00:06:43.075022275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:43.075119 containerd[1482]: time="2026-04-25T00:06:43.075101571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:43.102869 systemd[1]: Started cri-containerd-f7c7142be398aed3a7d5167ec69ba9438d68382aa96c2bddf4075697034b19d3.scope - libcontainer container f7c7142be398aed3a7d5167ec69ba9438d68382aa96c2bddf4075697034b19d3. 
Apr 25 00:06:43.152239 containerd[1482]: time="2026-04-25T00:06:43.152132352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-ftk6d,Uid:6c6a9ddd-2eb6-4fb5-8212-703faf12ab5e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f7c7142be398aed3a7d5167ec69ba9438d68382aa96c2bddf4075697034b19d3\"" Apr 25 00:06:43.156494 containerd[1482]: time="2026-04-25T00:06:43.156465239Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 25 00:06:43.765743 kubelet[2527]: E0425 00:06:43.765623 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:43.784820 kubelet[2527]: I0425 00:06:43.784669 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2g6kp" podStartSLOduration=2.784611775 podStartE2EDuration="2.784611775s" podCreationTimestamp="2026-04-25 00:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:06:43.784206742 +0000 UTC m=+8.461016979" watchObservedRunningTime="2026-04-25 00:06:43.784611775 +0000 UTC m=+8.461422019" Apr 25 00:06:44.149499 kubelet[2527]: E0425 00:06:44.149354 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:45.149604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2991029755.mount: Deactivated successfully. 
Apr 25 00:06:46.023945 containerd[1482]: time="2026-04-25T00:06:46.023852191Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:46.024534 containerd[1482]: time="2026-04-25T00:06:46.024466064Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 25 00:06:46.029000 containerd[1482]: time="2026-04-25T00:06:46.028171795Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:46.032136 containerd[1482]: time="2026-04-25T00:06:46.031975796Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:46.032450 containerd[1482]: time="2026-04-25T00:06:46.032329691Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.875827562s" Apr 25 00:06:46.032489 containerd[1482]: time="2026-04-25T00:06:46.032477772Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 25 00:06:46.045173 containerd[1482]: time="2026-04-25T00:06:46.045013939Z" level=info msg="CreateContainer within sandbox \"f7c7142be398aed3a7d5167ec69ba9438d68382aa96c2bddf4075697034b19d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 25 00:06:46.068988 containerd[1482]: time="2026-04-25T00:06:46.068890534Z" level=info msg="CreateContainer within sandbox 
\"f7c7142be398aed3a7d5167ec69ba9438d68382aa96c2bddf4075697034b19d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"76185c0aff1cf412d75f18f2867772456b35d67d770597b707b44a709594aa6a\"" Apr 25 00:06:46.072683 containerd[1482]: time="2026-04-25T00:06:46.072543125Z" level=info msg="StartContainer for \"76185c0aff1cf412d75f18f2867772456b35d67d770597b707b44a709594aa6a\"" Apr 25 00:06:46.116562 systemd[1]: Started cri-containerd-76185c0aff1cf412d75f18f2867772456b35d67d770597b707b44a709594aa6a.scope - libcontainer container 76185c0aff1cf412d75f18f2867772456b35d67d770597b707b44a709594aa6a. Apr 25 00:06:46.147201 containerd[1482]: time="2026-04-25T00:06:46.147085069Z" level=info msg="StartContainer for \"76185c0aff1cf412d75f18f2867772456b35d67d770597b707b44a709594aa6a\" returns successfully" Apr 25 00:06:48.082052 kubelet[2527]: E0425 00:06:48.081983 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:48.102471 kubelet[2527]: I0425 00:06:48.101856 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-ftk6d" podStartSLOduration=3.222773454 podStartE2EDuration="6.101817267s" podCreationTimestamp="2026-04-25 00:06:42 +0000 UTC" firstStartedPulling="2026-04-25 00:06:43.156063846 +0000 UTC m=+7.832874071" lastFinishedPulling="2026-04-25 00:06:46.035107655 +0000 UTC m=+10.711917884" observedRunningTime="2026-04-25 00:06:47.005575057 +0000 UTC m=+11.682385294" watchObservedRunningTime="2026-04-25 00:06:48.101817267 +0000 UTC m=+12.778627499" Apr 25 00:06:51.741942 kubelet[2527]: E0425 00:06:51.741883 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:51.950908 sudo[1655]: pam_unix(sudo:session): session closed 
for user root Apr 25 00:06:51.954789 sshd[1652]: pam_unix(sshd:session): session closed for user core Apr 25 00:06:51.958494 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:58092.service: Deactivated successfully. Apr 25 00:06:51.961732 systemd[1]: session-7.scope: Deactivated successfully. Apr 25 00:06:51.962010 systemd[1]: session-7.scope: Consumed 5.898s CPU time, 160.1M memory peak, 0B memory swap peak. Apr 25 00:06:51.963742 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Apr 25 00:06:51.968494 systemd-logind[1458]: Removed session 7. Apr 25 00:06:52.378882 update_engine[1461]: I20260425 00:06:52.378607 1461 update_attempter.cc:509] Updating boot flags... Apr 25 00:06:52.417949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2946) Apr 25 00:06:52.476176 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2948) Apr 25 00:06:54.167993 kubelet[2527]: E0425 00:06:54.167932 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:54.678581 systemd[1]: Created slice kubepods-besteffort-pod22f45d9f_0285_4b18_aa3f_3ed622a4f761.slice - libcontainer container kubepods-besteffort-pod22f45d9f_0285_4b18_aa3f_3ed622a4f761.slice. Apr 25 00:06:54.797221 systemd[1]: Created slice kubepods-besteffort-podb75c4b55_1b59_4796_8b83_0690fe1f5671.slice - libcontainer container kubepods-besteffort-podb75c4b55_1b59_4796_8b83_0690fe1f5671.slice. 
Apr 25 00:06:54.813073 kubelet[2527]: I0425 00:06:54.812850 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrx6\" (UniqueName: \"kubernetes.io/projected/22f45d9f-0285-4b18-aa3f-3ed622a4f761-kube-api-access-rlrx6\") pod \"calico-typha-8fdf6df8c-zk48q\" (UID: \"22f45d9f-0285-4b18-aa3f-3ed622a4f761\") " pod="calico-system/calico-typha-8fdf6df8c-zk48q" Apr 25 00:06:54.813073 kubelet[2527]: I0425 00:06:54.813088 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/22f45d9f-0285-4b18-aa3f-3ed622a4f761-typha-certs\") pod \"calico-typha-8fdf6df8c-zk48q\" (UID: \"22f45d9f-0285-4b18-aa3f-3ed622a4f761\") " pod="calico-system/calico-typha-8fdf6df8c-zk48q" Apr 25 00:06:54.813073 kubelet[2527]: I0425 00:06:54.813111 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22f45d9f-0285-4b18-aa3f-3ed622a4f761-tigera-ca-bundle\") pod \"calico-typha-8fdf6df8c-zk48q\" (UID: \"22f45d9f-0285-4b18-aa3f-3ed622a4f761\") " pod="calico-system/calico-typha-8fdf6df8c-zk48q" Apr 25 00:06:54.913326 kubelet[2527]: I0425 00:06:54.913211 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-lib-modules\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.913326 kubelet[2527]: I0425 00:06:54.913286 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-sys-fs\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 
00:06:54.914318 kubelet[2527]: I0425 00:06:54.914165 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-cni-net-dir\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.914318 kubelet[2527]: I0425 00:06:54.914252 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b75c4b55-1b59-4796-8b83-0690fe1f5671-node-certs\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.914318 kubelet[2527]: I0425 00:06:54.914299 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-policysync\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915514 kubelet[2527]: I0425 00:06:54.914366 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-cni-log-dir\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915514 kubelet[2527]: I0425 00:06:54.914475 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75c4b55-1b59-4796-8b83-0690fe1f5671-tigera-ca-bundle\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915514 kubelet[2527]: I0425 00:06:54.914512 2527 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-bpffs\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915514 kubelet[2527]: I0425 00:06:54.914524 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-var-run-calico\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915514 kubelet[2527]: I0425 00:06:54.914642 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-cni-bin-dir\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915723 kubelet[2527]: I0425 00:06:54.914747 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-flexvol-driver-host\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915723 kubelet[2527]: I0425 00:06:54.914760 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-var-lib-calico\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915723 kubelet[2527]: I0425 00:06:54.914794 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-nodeproc\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915723 kubelet[2527]: I0425 00:06:54.914806 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b75c4b55-1b59-4796-8b83-0690fe1f5671-xtables-lock\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.915723 kubelet[2527]: I0425 00:06:54.914834 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpg7\" (UniqueName: \"kubernetes.io/projected/b75c4b55-1b59-4796-8b83-0690fe1f5671-kube-api-access-5lpg7\") pod \"calico-node-thkp7\" (UID: \"b75c4b55-1b59-4796-8b83-0690fe1f5671\") " pod="calico-system/calico-node-thkp7" Apr 25 00:06:54.980442 kubelet[2527]: E0425 00:06:54.978715 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:06:54.993319 kubelet[2527]: E0425 00:06:54.992900 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:55.000105 containerd[1482]: time="2026-04-25T00:06:55.000037282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8fdf6df8c-zk48q,Uid:22f45d9f-0285-4b18-aa3f-3ed622a4f761,Namespace:calico-system,Attempt:0,}" Apr 25 00:06:55.031865 kubelet[2527]: E0425 00:06:55.029732 2527 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Apr 25 00:06:55.031865 kubelet[2527]: W0425 00:06:55.029774 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.031865 kubelet[2527]: E0425 00:06:55.029847 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.036053 kubelet[2527]: E0425 00:06:55.034142 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.036053 kubelet[2527]: W0425 00:06:55.034160 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.036053 kubelet[2527]: E0425 00:06:55.034181 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.050017 kubelet[2527]: E0425 00:06:55.049971 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.053733 kubelet[2527]: W0425 00:06:55.051455 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.053733 kubelet[2527]: E0425 00:06:55.051507 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.077449 containerd[1482]: time="2026-04-25T00:06:55.076930233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:55.080141 containerd[1482]: time="2026-04-25T00:06:55.079619162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:55.080141 containerd[1482]: time="2026-04-25T00:06:55.079683595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:55.080141 containerd[1482]: time="2026-04-25T00:06:55.079935339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:55.116014 systemd[1]: Started cri-containerd-ed93dde02ff232e9275a8c2c4fa062b45ab84773480ce2979a4cb818b5e9054b.scope - libcontainer container ed93dde02ff232e9275a8c2c4fa062b45ab84773480ce2979a4cb818b5e9054b. Apr 25 00:06:55.119510 containerd[1482]: time="2026-04-25T00:06:55.119451781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-thkp7,Uid:b75c4b55-1b59-4796-8b83-0690fe1f5671,Namespace:calico-system,Attempt:0,}" Apr 25 00:06:55.133224 kubelet[2527]: E0425 00:06:55.133166 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.136539 kubelet[2527]: W0425 00:06:55.135598 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.138692 kubelet[2527]: E0425 00:06:55.137959 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.138898 kubelet[2527]: I0425 00:06:55.138803 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmlh\" (UniqueName: \"kubernetes.io/projected/01cf6d9e-8d92-40f8-898e-724f0af87eaf-kube-api-access-frmlh\") pod \"csi-node-driver-lfm6g\" (UID: \"01cf6d9e-8d92-40f8-898e-724f0af87eaf\") " pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:06:55.139095 kubelet[2527]: E0425 00:06:55.139087 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.139134 kubelet[2527]: W0425 00:06:55.139127 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.139205 kubelet[2527]: E0425 00:06:55.139184 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.139484 kubelet[2527]: E0425 00:06:55.139459 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.139484 kubelet[2527]: W0425 00:06:55.139467 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.139484 kubelet[2527]: E0425 00:06:55.139475 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.140150 kubelet[2527]: E0425 00:06:55.140121 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.140282 kubelet[2527]: W0425 00:06:55.140197 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.140282 kubelet[2527]: E0425 00:06:55.140207 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.140282 kubelet[2527]: I0425 00:06:55.140240 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01cf6d9e-8d92-40f8-898e-724f0af87eaf-socket-dir\") pod \"csi-node-driver-lfm6g\" (UID: \"01cf6d9e-8d92-40f8-898e-724f0af87eaf\") " pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:06:55.140575 kubelet[2527]: E0425 00:06:55.140566 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.140684 kubelet[2527]: W0425 00:06:55.140636 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.140738 kubelet[2527]: E0425 00:06:55.140664 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.140803 kubelet[2527]: I0425 00:06:55.140770 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01cf6d9e-8d92-40f8-898e-724f0af87eaf-kubelet-dir\") pod \"csi-node-driver-lfm6g\" (UID: \"01cf6d9e-8d92-40f8-898e-724f0af87eaf\") " pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:06:55.141056 kubelet[2527]: E0425 00:06:55.141015 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.141056 kubelet[2527]: W0425 00:06:55.141022 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.141056 kubelet[2527]: E0425 00:06:55.141029 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.141428 kubelet[2527]: E0425 00:06:55.141333 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.141428 kubelet[2527]: W0425 00:06:55.141340 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.141428 kubelet[2527]: E0425 00:06:55.141348 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.141647 kubelet[2527]: E0425 00:06:55.141626 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.141647 kubelet[2527]: W0425 00:06:55.141633 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.141647 kubelet[2527]: E0425 00:06:55.141640 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.142042 kubelet[2527]: E0425 00:06:55.141970 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.142042 kubelet[2527]: W0425 00:06:55.141981 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.142042 kubelet[2527]: E0425 00:06:55.142029 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.142450 kubelet[2527]: E0425 00:06:55.142309 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.142450 kubelet[2527]: W0425 00:06:55.142317 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.142450 kubelet[2527]: E0425 00:06:55.142324 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.142450 kubelet[2527]: I0425 00:06:55.142337 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/01cf6d9e-8d92-40f8-898e-724f0af87eaf-varrun\") pod \"csi-node-driver-lfm6g\" (UID: \"01cf6d9e-8d92-40f8-898e-724f0af87eaf\") " pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:06:55.142634 kubelet[2527]: E0425 00:06:55.142594 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.142634 kubelet[2527]: W0425 00:06:55.142603 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.142707 kubelet[2527]: E0425 00:06:55.142610 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.144868 kubelet[2527]: I0425 00:06:55.144758 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01cf6d9e-8d92-40f8-898e-724f0af87eaf-registration-dir\") pod \"csi-node-driver-lfm6g\" (UID: \"01cf6d9e-8d92-40f8-898e-724f0af87eaf\") " pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:06:55.145514 kubelet[2527]: E0425 00:06:55.145468 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.145582 kubelet[2527]: W0425 00:06:55.145574 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.145767 kubelet[2527]: E0425 00:06:55.145669 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.145997 kubelet[2527]: E0425 00:06:55.145990 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.146039 kubelet[2527]: W0425 00:06:55.146024 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.146039 kubelet[2527]: E0425 00:06:55.146033 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.146445 kubelet[2527]: E0425 00:06:55.146357 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.146445 kubelet[2527]: W0425 00:06:55.146364 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.146445 kubelet[2527]: E0425 00:06:55.146371 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.146741 kubelet[2527]: E0425 00:06:55.146713 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.146741 kubelet[2527]: W0425 00:06:55.146720 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.146741 kubelet[2527]: E0425 00:06:55.146727 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.203189 containerd[1482]: time="2026-04-25T00:06:55.203054926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:06:55.203189 containerd[1482]: time="2026-04-25T00:06:55.203124302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:06:55.203189 containerd[1482]: time="2026-04-25T00:06:55.203135765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:55.203710 containerd[1482]: time="2026-04-25T00:06:55.203607779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:06:55.211285 containerd[1482]: time="2026-04-25T00:06:55.211243747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8fdf6df8c-zk48q,Uid:22f45d9f-0285-4b18-aa3f-3ed622a4f761,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed93dde02ff232e9275a8c2c4fa062b45ab84773480ce2979a4cb818b5e9054b\"" Apr 25 00:06:55.212081 kubelet[2527]: E0425 00:06:55.212048 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:55.219826 containerd[1482]: time="2026-04-25T00:06:55.219732996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 25 00:06:55.221625 systemd[1]: Started cri-containerd-b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c.scope - libcontainer container b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c. 
Apr 25 00:06:55.241955 containerd[1482]: time="2026-04-25T00:06:55.241836062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-thkp7,Uid:b75c4b55-1b59-4796-8b83-0690fe1f5671,Namespace:calico-system,Attempt:0,} returns sandbox id \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\"" Apr 25 00:06:55.246049 kubelet[2527]: E0425 00:06:55.246029 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.246049 kubelet[2527]: W0425 00:06:55.246043 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.246217 kubelet[2527]: E0425 00:06:55.246059 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.246309 kubelet[2527]: E0425 00:06:55.246297 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.246309 kubelet[2527]: W0425 00:06:55.246309 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.246387 kubelet[2527]: E0425 00:06:55.246316 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.246706 kubelet[2527]: E0425 00:06:55.246681 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.246706 kubelet[2527]: W0425 00:06:55.246698 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.246791 kubelet[2527]: E0425 00:06:55.246707 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.246962 kubelet[2527]: E0425 00:06:55.246920 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.246962 kubelet[2527]: W0425 00:06:55.246933 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.246962 kubelet[2527]: E0425 00:06:55.246939 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.247143 kubelet[2527]: E0425 00:06:55.247131 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.247162 kubelet[2527]: W0425 00:06:55.247144 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.247162 kubelet[2527]: E0425 00:06:55.247151 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.247424 kubelet[2527]: E0425 00:06:55.247383 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.247424 kubelet[2527]: W0425 00:06:55.247416 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.247479 kubelet[2527]: E0425 00:06:55.247423 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.247662 kubelet[2527]: E0425 00:06:55.247636 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.247703 kubelet[2527]: W0425 00:06:55.247648 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.247703 kubelet[2527]: E0425 00:06:55.247671 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.247843 kubelet[2527]: E0425 00:06:55.247806 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.247843 kubelet[2527]: W0425 00:06:55.247818 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.247843 kubelet[2527]: E0425 00:06:55.247823 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.247989 kubelet[2527]: E0425 00:06:55.247964 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.247989 kubelet[2527]: W0425 00:06:55.247975 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.247989 kubelet[2527]: E0425 00:06:55.247982 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.248152 kubelet[2527]: E0425 00:06:55.248132 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.248152 kubelet[2527]: W0425 00:06:55.248144 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.248152 kubelet[2527]: E0425 00:06:55.248150 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.248313 kubelet[2527]: E0425 00:06:55.248273 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.248313 kubelet[2527]: W0425 00:06:55.248287 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.248313 kubelet[2527]: E0425 00:06:55.248295 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.248500 kubelet[2527]: E0425 00:06:55.248485 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.248524 kubelet[2527]: W0425 00:06:55.248500 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.248524 kubelet[2527]: E0425 00:06:55.248509 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.248848 kubelet[2527]: E0425 00:06:55.248834 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.248848 kubelet[2527]: W0425 00:06:55.248847 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.248928 kubelet[2527]: E0425 00:06:55.248854 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.249013 kubelet[2527]: E0425 00:06:55.249000 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.249013 kubelet[2527]: W0425 00:06:55.249011 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.249113 kubelet[2527]: E0425 00:06:55.249018 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.249183 kubelet[2527]: E0425 00:06:55.249171 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.249183 kubelet[2527]: W0425 00:06:55.249183 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.249239 kubelet[2527]: E0425 00:06:55.249188 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.249358 kubelet[2527]: E0425 00:06:55.249345 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.249380 kubelet[2527]: W0425 00:06:55.249358 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.249380 kubelet[2527]: E0425 00:06:55.249363 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.249582 kubelet[2527]: E0425 00:06:55.249569 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.249582 kubelet[2527]: W0425 00:06:55.249581 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.249647 kubelet[2527]: E0425 00:06:55.249587 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.249776 kubelet[2527]: E0425 00:06:55.249764 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.249776 kubelet[2527]: W0425 00:06:55.249771 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.249810 kubelet[2527]: E0425 00:06:55.249777 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.249994 kubelet[2527]: E0425 00:06:55.249981 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.249994 kubelet[2527]: W0425 00:06:55.249993 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.250052 kubelet[2527]: E0425 00:06:55.249998 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.250193 kubelet[2527]: E0425 00:06:55.250147 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.250193 kubelet[2527]: W0425 00:06:55.250158 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.250193 kubelet[2527]: E0425 00:06:55.250165 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.250379 kubelet[2527]: E0425 00:06:55.250366 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.250379 kubelet[2527]: W0425 00:06:55.250378 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.250481 kubelet[2527]: E0425 00:06:55.250384 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.250572 kubelet[2527]: E0425 00:06:55.250559 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.250572 kubelet[2527]: W0425 00:06:55.250571 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.250607 kubelet[2527]: E0425 00:06:55.250576 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.250762 kubelet[2527]: E0425 00:06:55.250750 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.250762 kubelet[2527]: W0425 00:06:55.250761 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.250805 kubelet[2527]: E0425 00:06:55.250766 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.250955 kubelet[2527]: E0425 00:06:55.250944 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.250955 kubelet[2527]: W0425 00:06:55.250951 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.250955 kubelet[2527]: E0425 00:06:55.250956 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:55.251289 kubelet[2527]: E0425 00:06:55.251274 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.251289 kubelet[2527]: W0425 00:06:55.251287 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.251336 kubelet[2527]: E0425 00:06:55.251294 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:55.257954 kubelet[2527]: E0425 00:06:55.257936 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:55.257954 kubelet[2527]: W0425 00:06:55.257951 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:55.258065 kubelet[2527]: E0425 00:06:55.257961 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:56.504797 kubelet[2527]: E0425 00:06:56.504596 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:06:56.981854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338457429.mount: Deactivated successfully. 
Apr 25 00:06:57.847163 containerd[1482]: time="2026-04-25T00:06:57.846932337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:57.851603 containerd[1482]: time="2026-04-25T00:06:57.851537557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 25 00:06:57.853591 containerd[1482]: time="2026-04-25T00:06:57.852786452Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:57.855246 containerd[1482]: time="2026-04-25T00:06:57.855215833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:57.855800 containerd[1482]: time="2026-04-25T00:06:57.855762432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.635995345s" Apr 25 00:06:57.855800 containerd[1482]: time="2026-04-25T00:06:57.855793955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 25 00:06:57.856966 containerd[1482]: time="2026-04-25T00:06:57.856937089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 25 00:06:57.874346 containerd[1482]: time="2026-04-25T00:06:57.874291109Z" level=info msg="CreateContainer within sandbox \"ed93dde02ff232e9275a8c2c4fa062b45ab84773480ce2979a4cb818b5e9054b\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 25 00:06:57.894029 containerd[1482]: time="2026-04-25T00:06:57.893894190Z" level=info msg="CreateContainer within sandbox \"ed93dde02ff232e9275a8c2c4fa062b45ab84773480ce2979a4cb818b5e9054b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c78b65cd62f9c98a18a014a6d07edd4619d3a397de6aee0eabb1460d40b3713d\"" Apr 25 00:06:57.896002 containerd[1482]: time="2026-04-25T00:06:57.895957039Z" level=info msg="StartContainer for \"c78b65cd62f9c98a18a014a6d07edd4619d3a397de6aee0eabb1460d40b3713d\"" Apr 25 00:06:57.957689 systemd[1]: Started cri-containerd-c78b65cd62f9c98a18a014a6d07edd4619d3a397de6aee0eabb1460d40b3713d.scope - libcontainer container c78b65cd62f9c98a18a014a6d07edd4619d3a397de6aee0eabb1460d40b3713d. Apr 25 00:06:57.999114 containerd[1482]: time="2026-04-25T00:06:57.999021747Z" level=info msg="StartContainer for \"c78b65cd62f9c98a18a014a6d07edd4619d3a397de6aee0eabb1460d40b3713d\" returns successfully" Apr 25 00:06:58.068002 kubelet[2527]: E0425 00:06:58.067626 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:58.075892 kubelet[2527]: E0425 00:06:58.075788 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.075892 kubelet[2527]: W0425 00:06:58.075810 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.075892 kubelet[2527]: E0425 00:06:58.075935 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.075892 kubelet[2527]: E0425 00:06:58.076202 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.075892 kubelet[2527]: W0425 00:06:58.076209 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.076846 kubelet[2527]: E0425 00:06:58.076221 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.076846 kubelet[2527]: E0425 00:06:58.076474 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.076846 kubelet[2527]: W0425 00:06:58.076481 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.076846 kubelet[2527]: E0425 00:06:58.076489 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.076846 kubelet[2527]: E0425 00:06:58.076707 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.076846 kubelet[2527]: W0425 00:06:58.076713 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.076846 kubelet[2527]: E0425 00:06:58.076721 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.076956 kubelet[2527]: E0425 00:06:58.076876 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.076956 kubelet[2527]: W0425 00:06:58.076880 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.076956 kubelet[2527]: E0425 00:06:58.076886 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.077028 kubelet[2527]: E0425 00:06:58.077002 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.077028 kubelet[2527]: W0425 00:06:58.077011 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.077028 kubelet[2527]: E0425 00:06:58.077017 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.077459 kubelet[2527]: E0425 00:06:58.077277 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.077459 kubelet[2527]: W0425 00:06:58.077288 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.077459 kubelet[2527]: E0425 00:06:58.077301 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.080138 kubelet[2527]: E0425 00:06:58.080008 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.080138 kubelet[2527]: W0425 00:06:58.080020 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.080138 kubelet[2527]: E0425 00:06:58.080031 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.080921 kubelet[2527]: E0425 00:06:58.080806 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.080921 kubelet[2527]: W0425 00:06:58.080815 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.080921 kubelet[2527]: E0425 00:06:58.080825 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.081191 kubelet[2527]: E0425 00:06:58.081173 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.081239 kubelet[2527]: W0425 00:06:58.081232 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.081310 kubelet[2527]: E0425 00:06:58.081291 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.081637 kubelet[2527]: E0425 00:06:58.081578 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.081637 kubelet[2527]: W0425 00:06:58.081588 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.081637 kubelet[2527]: E0425 00:06:58.081599 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.081997 kubelet[2527]: E0425 00:06:58.081953 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.081997 kubelet[2527]: W0425 00:06:58.081961 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.081997 kubelet[2527]: E0425 00:06:58.081968 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.082369 kubelet[2527]: E0425 00:06:58.082300 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.082369 kubelet[2527]: W0425 00:06:58.082312 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.082369 kubelet[2527]: E0425 00:06:58.082323 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.082626 kubelet[2527]: E0425 00:06:58.082619 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.082719 kubelet[2527]: W0425 00:06:58.082680 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.082719 kubelet[2527]: E0425 00:06:58.082690 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.082886 kubelet[2527]: E0425 00:06:58.082880 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.082971 kubelet[2527]: W0425 00:06:58.082918 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.082971 kubelet[2527]: E0425 00:06:58.082926 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.086539 kubelet[2527]: I0425 00:06:58.085871 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-8fdf6df8c-zk48q" podStartSLOduration=1.446665734 podStartE2EDuration="4.085776882s" podCreationTimestamp="2026-04-25 00:06:54 +0000 UTC" firstStartedPulling="2026-04-25 00:06:55.217477304 +0000 UTC m=+19.894287528" lastFinishedPulling="2026-04-25 00:06:57.856588449 +0000 UTC m=+22.533398676" observedRunningTime="2026-04-25 00:06:58.084037675 +0000 UTC m=+22.760847912" watchObservedRunningTime="2026-04-25 00:06:58.085776882 +0000 UTC m=+22.762587107" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.139073 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.144746 kubelet[2527]: W0425 00:06:58.139140 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.139212 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.139548 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.144746 kubelet[2527]: W0425 00:06:58.139557 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.139568 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.139943 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.144746 kubelet[2527]: W0425 00:06:58.139951 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.139961 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.144746 kubelet[2527]: E0425 00:06:58.140240 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145233 kubelet[2527]: W0425 00:06:58.140247 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145233 kubelet[2527]: E0425 00:06:58.140255 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.145233 kubelet[2527]: E0425 00:06:58.140428 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145233 kubelet[2527]: W0425 00:06:58.140435 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145233 kubelet[2527]: E0425 00:06:58.140443 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.145233 kubelet[2527]: E0425 00:06:58.141294 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145233 kubelet[2527]: W0425 00:06:58.141303 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145233 kubelet[2527]: E0425 00:06:58.141316 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.145233 kubelet[2527]: E0425 00:06:58.141576 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145233 kubelet[2527]: W0425 00:06:58.141586 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.141594 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.143962 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145386 kubelet[2527]: W0425 00:06:58.144035 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.144060 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.144343 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145386 kubelet[2527]: W0425 00:06:58.144350 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.144358 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.144597 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.145386 kubelet[2527]: W0425 00:06:58.144603 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.145386 kubelet[2527]: E0425 00:06:58.144613 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.147083 kubelet[2527]: E0425 00:06:58.147051 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.147083 kubelet[2527]: W0425 00:06:58.147077 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.147182 kubelet[2527]: E0425 00:06:58.147090 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.147485 kubelet[2527]: E0425 00:06:58.147460 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.147485 kubelet[2527]: W0425 00:06:58.147479 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.150201 kubelet[2527]: E0425 00:06:58.147490 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.150201 kubelet[2527]: E0425 00:06:58.147950 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.150201 kubelet[2527]: W0425 00:06:58.147958 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.150201 kubelet[2527]: E0425 00:06:58.147968 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.151523 kubelet[2527]: E0425 00:06:58.151445 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.151577 kubelet[2527]: W0425 00:06:58.151528 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.151746 kubelet[2527]: E0425 00:06:58.151636 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.152046 kubelet[2527]: E0425 00:06:58.152016 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.152046 kubelet[2527]: W0425 00:06:58.152043 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.152120 kubelet[2527]: E0425 00:06:58.152056 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.154104 kubelet[2527]: E0425 00:06:58.153729 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.154104 kubelet[2527]: W0425 00:06:58.154043 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.154728 kubelet[2527]: E0425 00:06:58.154371 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.156569 kubelet[2527]: E0425 00:06:58.155019 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.156569 kubelet[2527]: W0425 00:06:58.155047 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.156569 kubelet[2527]: E0425 00:06:58.155077 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:58.157574 kubelet[2527]: E0425 00:06:58.157439 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:58.157609 kubelet[2527]: W0425 00:06:58.157578 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:58.157765 kubelet[2527]: E0425 00:06:58.157690 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:58.457133 kubelet[2527]: E0425 00:06:58.456440 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:06:59.073768 kubelet[2527]: I0425 00:06:59.073022 2527 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 00:06:59.075988 kubelet[2527]: E0425 00:06:59.075838 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:06:59.092369 kubelet[2527]: E0425 00:06:59.092227 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.092369 kubelet[2527]: W0425 00:06:59.092252 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.092369 kubelet[2527]: E0425 00:06:59.092315 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.092369 kubelet[2527]: E0425 00:06:59.092549 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.092369 kubelet[2527]: W0425 00:06:59.092555 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.092369 kubelet[2527]: E0425 00:06:59.092563 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.092784 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.093810 kubelet[2527]: W0425 00:06:59.092790 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.092797 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.092959 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.093810 kubelet[2527]: W0425 00:06:59.092964 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.092969 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.093111 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.093810 kubelet[2527]: W0425 00:06:59.093136 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.093153 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.093810 kubelet[2527]: E0425 00:06:59.093275 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094065 kubelet[2527]: W0425 00:06:59.093279 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094065 kubelet[2527]: E0425 00:06:59.093284 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.094065 kubelet[2527]: E0425 00:06:59.093469 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094065 kubelet[2527]: W0425 00:06:59.093474 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094065 kubelet[2527]: E0425 00:06:59.093480 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.094065 kubelet[2527]: E0425 00:06:59.093610 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094065 kubelet[2527]: W0425 00:06:59.093614 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094065 kubelet[2527]: E0425 00:06:59.093620 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.094065 kubelet[2527]: E0425 00:06:59.093892 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094065 kubelet[2527]: W0425 00:06:59.093898 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.093905 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.094038 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094350 kubelet[2527]: W0425 00:06:59.094043 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.094048 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.094167 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094350 kubelet[2527]: W0425 00:06:59.094171 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.094177 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.094321 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094350 kubelet[2527]: W0425 00:06:59.094327 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094350 kubelet[2527]: E0425 00:06:59.094336 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.094685 kubelet[2527]: E0425 00:06:59.094562 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094685 kubelet[2527]: W0425 00:06:59.094569 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094685 kubelet[2527]: E0425 00:06:59.094578 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.094759 kubelet[2527]: E0425 00:06:59.094753 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.094759 kubelet[2527]: W0425 00:06:59.094758 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.094821 kubelet[2527]: E0425 00:06:59.094763 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.095007 kubelet[2527]: E0425 00:06:59.094965 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.095007 kubelet[2527]: W0425 00:06:59.094979 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.095007 kubelet[2527]: E0425 00:06:59.094985 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.157942 kubelet[2527]: E0425 00:06:59.157784 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.157942 kubelet[2527]: W0425 00:06:59.157863 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.157942 kubelet[2527]: E0425 00:06:59.157946 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.161540 kubelet[2527]: E0425 00:06:59.158277 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.161540 kubelet[2527]: W0425 00:06:59.158286 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.161540 kubelet[2527]: E0425 00:06:59.158296 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.161540 kubelet[2527]: E0425 00:06:59.161148 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.164877 kubelet[2527]: W0425 00:06:59.164674 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.165219 kubelet[2527]: E0425 00:06:59.165198 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:06:59.165802 kubelet[2527]: E0425 00:06:59.165763 2527 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:06:59.165944 kubelet[2527]: W0425 00:06:59.165895 2527 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:06:59.166013 kubelet[2527]: E0425 00:06:59.165920 2527 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:06:59.635330 containerd[1482]: time="2026-04-25T00:06:59.635208971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:59.636937 containerd[1482]: time="2026-04-25T00:06:59.636806496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 25 00:06:59.637816 containerd[1482]: time="2026-04-25T00:06:59.637766887Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:59.641573 containerd[1482]: time="2026-04-25T00:06:59.641360908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:06:59.644810 containerd[1482]: time="2026-04-25T00:06:59.644678182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.787692579s" Apr 25 00:06:59.644810 containerd[1482]: time="2026-04-25T00:06:59.644743451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 25 00:06:59.658142 containerd[1482]: time="2026-04-25T00:06:59.658005866Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 25 00:06:59.685635 containerd[1482]: time="2026-04-25T00:06:59.685485683Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98\"" Apr 25 00:06:59.688505 containerd[1482]: time="2026-04-25T00:06:59.688470180Z" level=info msg="StartContainer for \"d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98\"" Apr 25 00:06:59.738922 systemd[1]: Started cri-containerd-d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98.scope - libcontainer container d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98. Apr 25 00:06:59.811504 containerd[1482]: time="2026-04-25T00:06:59.811205257Z" level=info msg="StartContainer for \"d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98\" returns successfully" Apr 25 00:06:59.813813 systemd[1]: cri-containerd-d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98.scope: Deactivated successfully. Apr 25 00:06:59.855354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98-rootfs.mount: Deactivated successfully. 
Apr 25 00:06:59.918877 containerd[1482]: time="2026-04-25T00:06:59.916061253Z" level=info msg="shim disconnected" id=d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98 namespace=k8s.io Apr 25 00:06:59.920654 containerd[1482]: time="2026-04-25T00:06:59.919291553Z" level=warning msg="cleaning up after shim disconnected" id=d743f0bcf178a70d845e56228b2a90f1c9e68249f186866b646f2e9c7e180b98 namespace=k8s.io Apr 25 00:06:59.920957 containerd[1482]: time="2026-04-25T00:06:59.920836785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:07:00.012594 containerd[1482]: time="2026-04-25T00:07:00.012417755Z" level=warning msg="cleanup warnings time=\"2026-04-25T00:07:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 25 00:07:00.078839 containerd[1482]: time="2026-04-25T00:07:00.078770833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 25 00:07:00.454797 kubelet[2527]: E0425 00:07:00.454678 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:02.479616 kubelet[2527]: E0425 00:07:02.479317 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:04.500979 kubelet[2527]: E0425 00:07:04.500790 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:05.132374 kubelet[2527]: I0425 00:07:05.132142 2527 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 00:07:05.135586 kubelet[2527]: E0425 00:07:05.132887 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:06.119909 kubelet[2527]: E0425 00:07:06.118951 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:06.456859 kubelet[2527]: E0425 00:07:06.456243 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:08.492906 kubelet[2527]: E0425 00:07:08.492648 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:09.040517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906354581.mount: Deactivated successfully. 
Apr 25 00:07:09.252038 containerd[1482]: time="2026-04-25T00:07:09.251903792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:09.253204 containerd[1482]: time="2026-04-25T00:07:09.252830505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 25 00:07:09.265535 containerd[1482]: time="2026-04-25T00:07:09.263506177Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:09.288262 containerd[1482]: time="2026-04-25T00:07:09.288051033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:09.288917 containerd[1482]: time="2026-04-25T00:07:09.288520106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.209708447s" Apr 25 00:07:09.288917 containerd[1482]: time="2026-04-25T00:07:09.288569003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 25 00:07:09.313752 containerd[1482]: time="2026-04-25T00:07:09.313599229Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 25 00:07:09.465452 containerd[1482]: time="2026-04-25T00:07:09.465238270Z" level=info 
msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa\"" Apr 25 00:07:09.472430 containerd[1482]: time="2026-04-25T00:07:09.472325835Z" level=info msg="StartContainer for \"8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa\"" Apr 25 00:07:09.585151 systemd[1]: Started cri-containerd-8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa.scope - libcontainer container 8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa. Apr 25 00:07:09.675583 containerd[1482]: time="2026-04-25T00:07:09.675351190Z" level=info msg="StartContainer for \"8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa\" returns successfully" Apr 25 00:07:09.820917 systemd[1]: cri-containerd-8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa.scope: Deactivated successfully. Apr 25 00:07:09.858467 containerd[1482]: time="2026-04-25T00:07:09.857902197Z" level=info msg="shim disconnected" id=8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa namespace=k8s.io Apr 25 00:07:09.858467 containerd[1482]: time="2026-04-25T00:07:09.858114362Z" level=warning msg="cleaning up after shim disconnected" id=8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa namespace=k8s.io Apr 25 00:07:09.858467 containerd[1482]: time="2026-04-25T00:07:09.858138939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:07:10.037492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8acaf72d5b913a6bf7767b8f83e3889dcdd2d4b8217e297b284cfb0487702faa-rootfs.mount: Deactivated successfully. 
Apr 25 00:07:10.151360 containerd[1482]: time="2026-04-25T00:07:10.150730009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 25 00:07:10.457886 kubelet[2527]: E0425 00:07:10.456720 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:12.476969 kubelet[2527]: E0425 00:07:12.475995 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:14.456100 kubelet[2527]: E0425 00:07:14.455832 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:14.880387 containerd[1482]: time="2026-04-25T00:07:14.880128686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:14.882241 containerd[1482]: time="2026-04-25T00:07:14.881220483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 25 00:07:14.882241 containerd[1482]: time="2026-04-25T00:07:14.881974893Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:14.885087 containerd[1482]: 
time="2026-04-25T00:07:14.885036845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:14.889067 containerd[1482]: time="2026-04-25T00:07:14.888864125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.737988271s" Apr 25 00:07:14.889067 containerd[1482]: time="2026-04-25T00:07:14.888960224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 25 00:07:14.902518 containerd[1482]: time="2026-04-25T00:07:14.902278518Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 25 00:07:14.968592 containerd[1482]: time="2026-04-25T00:07:14.968471146Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75\"" Apr 25 00:07:14.971462 containerd[1482]: time="2026-04-25T00:07:14.969679622Z" level=info msg="StartContainer for \"26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75\"" Apr 25 00:07:15.089794 systemd[1]: Started cri-containerd-26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75.scope - libcontainer container 26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75. 
Apr 25 00:07:15.207767 containerd[1482]: time="2026-04-25T00:07:15.205518951Z" level=info msg="StartContainer for \"26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75\" returns successfully" Apr 25 00:07:16.041247 systemd[1]: cri-containerd-26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75.scope: Deactivated successfully. Apr 25 00:07:16.094220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75-rootfs.mount: Deactivated successfully. Apr 25 00:07:16.109100 containerd[1482]: time="2026-04-25T00:07:16.108875642Z" level=info msg="shim disconnected" id=26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75 namespace=k8s.io Apr 25 00:07:16.109100 containerd[1482]: time="2026-04-25T00:07:16.108973457Z" level=warning msg="cleaning up after shim disconnected" id=26076168ea5dee7e5ee40eee4e6d48d0fec32a91a2bbce93a89134f5a5336a75 namespace=k8s.io Apr 25 00:07:16.109100 containerd[1482]: time="2026-04-25T00:07:16.108980658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:07:16.111297 kubelet[2527]: I0425 00:07:16.111269 2527 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 25 00:07:16.270449 containerd[1482]: time="2026-04-25T00:07:16.269641032Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 25 00:07:16.271603 systemd[1]: Created slice kubepods-besteffort-podc6c7637f_a019_4d59_9ef0_740af17d1030.slice - libcontainer container kubepods-besteffort-podc6c7637f_a019_4d59_9ef0_740af17d1030.slice. Apr 25 00:07:16.281314 systemd[1]: Created slice kubepods-besteffort-podb5fc7c52_af2d_46b0_991d_30bddad7f47f.slice - libcontainer container kubepods-besteffort-podb5fc7c52_af2d_46b0_991d_30bddad7f47f.slice. 
Apr 25 00:07:16.285162 systemd[1]: Created slice kubepods-burstable-pod7f40b2e4_ac3e_4645_a45c_301ecaa49eb6.slice - libcontainer container kubepods-burstable-pod7f40b2e4_ac3e_4645_a45c_301ecaa49eb6.slice. Apr 25 00:07:16.311051 systemd[1]: Created slice kubepods-besteffort-pod32405509_dcbc_4cec_81f7_5a2871f23270.slice - libcontainer container kubepods-besteffort-pod32405509_dcbc_4cec_81f7_5a2871f23270.slice. Apr 25 00:07:16.317962 containerd[1482]: time="2026-04-25T00:07:16.317818140Z" level=info msg="CreateContainer within sandbox \"b39742eb52cc518bc448bcd7da0bbc31d8f34310fc6482173f9f9c51a538e35c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8c426a186c31dba8e97df7069ef09fa3ad9f227780f987b925330b4d75f051f6\"" Apr 25 00:07:16.323653 containerd[1482]: time="2026-04-25T00:07:16.321198818Z" level=info msg="StartContainer for \"8c426a186c31dba8e97df7069ef09fa3ad9f227780f987b925330b4d75f051f6\"" Apr 25 00:07:16.326015 systemd[1]: Created slice kubepods-besteffort-pod790ad1c5_039c_4993_a342_0383d9c1d881.slice - libcontainer container kubepods-besteffort-pod790ad1c5_039c_4993_a342_0383d9c1d881.slice. 
Apr 25 00:07:16.327628 kubelet[2527]: I0425 00:07:16.327143 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmvz\" (UniqueName: \"kubernetes.io/projected/6832a17a-615d-4511-81ef-78e8a9d1028f-kube-api-access-jvmvz\") pod \"coredns-7d764666f9-qhqdg\" (UID: \"6832a17a-615d-4511-81ef-78e8a9d1028f\") " pod="kube-system/coredns-7d764666f9-qhqdg" Apr 25 00:07:16.327786 kubelet[2527]: I0425 00:07:16.327768 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5fc7c52-af2d-46b0-991d-30bddad7f47f-tigera-ca-bundle\") pod \"calico-kube-controllers-746bc57ccf-bcnv6\" (UID: \"b5fc7c52-af2d-46b0-991d-30bddad7f47f\") " pod="calico-system/calico-kube-controllers-746bc57ccf-bcnv6" Apr 25 00:07:16.328141 kubelet[2527]: I0425 00:07:16.328131 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7t5g\" (UniqueName: \"kubernetes.io/projected/7f40b2e4-ac3e-4645-a45c-301ecaa49eb6-kube-api-access-b7t5g\") pod \"coredns-7d764666f9-2wggw\" (UID: \"7f40b2e4-ac3e-4645-a45c-301ecaa49eb6\") " pod="kube-system/coredns-7d764666f9-2wggw" Apr 25 00:07:16.328624 kubelet[2527]: I0425 00:07:16.328195 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6832a17a-615d-4511-81ef-78e8a9d1028f-config-volume\") pod \"coredns-7d764666f9-qhqdg\" (UID: \"6832a17a-615d-4511-81ef-78e8a9d1028f\") " pod="kube-system/coredns-7d764666f9-qhqdg" Apr 25 00:07:16.328709 kubelet[2527]: I0425 00:07:16.328699 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-backend-key-pair\") pod \"whisker-b49f7945-25tvh\" (UID: 
\"790ad1c5-039c-4993-a342-0383d9c1d881\") " pod="calico-system/whisker-b49f7945-25tvh" Apr 25 00:07:16.329247 kubelet[2527]: I0425 00:07:16.329234 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9b5j\" (UniqueName: \"kubernetes.io/projected/32405509-dcbc-4cec-81f7-5a2871f23270-kube-api-access-s9b5j\") pod \"calico-apiserver-7bfd9bd8c4-dkdwn\" (UID: \"32405509-dcbc-4cec-81f7-5a2871f23270\") " pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" Apr 25 00:07:16.329333 kubelet[2527]: I0425 00:07:16.329317 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xzs\" (UniqueName: \"kubernetes.io/projected/c6c7637f-a019-4d59-9ef0-740af17d1030-kube-api-access-c2xzs\") pod \"calico-apiserver-7bfd9bd8c4-6pnn8\" (UID: \"c6c7637f-a019-4d59-9ef0-740af17d1030\") " pod="calico-system/calico-apiserver-7bfd9bd8c4-6pnn8" Apr 25 00:07:16.329373 kubelet[2527]: I0425 00:07:16.329367 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8a7f5e11-f035-4efb-b64e-0dc54e087c6e-goldmane-key-pair\") pod \"goldmane-9f7667bb8-wxzlr\" (UID: \"8a7f5e11-f035-4efb-b64e-0dc54e087c6e\") " pod="calico-system/goldmane-9f7667bb8-wxzlr" Apr 25 00:07:16.329504 kubelet[2527]: I0425 00:07:16.329496 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-ca-bundle\") pod \"whisker-b49f7945-25tvh\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " pod="calico-system/whisker-b49f7945-25tvh" Apr 25 00:07:16.329552 kubelet[2527]: I0425 00:07:16.329546 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/32405509-dcbc-4cec-81f7-5a2871f23270-calico-apiserver-certs\") pod \"calico-apiserver-7bfd9bd8c4-dkdwn\" (UID: \"32405509-dcbc-4cec-81f7-5a2871f23270\") " pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" Apr 25 00:07:16.329589 kubelet[2527]: I0425 00:07:16.329583 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7f5e11-f035-4efb-b64e-0dc54e087c6e-config\") pod \"goldmane-9f7667bb8-wxzlr\" (UID: \"8a7f5e11-f035-4efb-b64e-0dc54e087c6e\") " pod="calico-system/goldmane-9f7667bb8-wxzlr" Apr 25 00:07:16.329636 kubelet[2527]: I0425 00:07:16.329629 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjrd\" (UniqueName: \"kubernetes.io/projected/8a7f5e11-f035-4efb-b64e-0dc54e087c6e-kube-api-access-htjrd\") pod \"goldmane-9f7667bb8-wxzlr\" (UID: \"8a7f5e11-f035-4efb-b64e-0dc54e087c6e\") " pod="calico-system/goldmane-9f7667bb8-wxzlr" Apr 25 00:07:16.329679 kubelet[2527]: I0425 00:07:16.329674 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c6c7637f-a019-4d59-9ef0-740af17d1030-calico-apiserver-certs\") pod \"calico-apiserver-7bfd9bd8c4-6pnn8\" (UID: \"c6c7637f-a019-4d59-9ef0-740af17d1030\") " pod="calico-system/calico-apiserver-7bfd9bd8c4-6pnn8" Apr 25 00:07:16.329780 kubelet[2527]: I0425 00:07:16.329772 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-nginx-config\") pod \"whisker-b49f7945-25tvh\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " pod="calico-system/whisker-b49f7945-25tvh" Apr 25 00:07:16.329832 kubelet[2527]: I0425 00:07:16.329820 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8wrx5\" (UniqueName: \"kubernetes.io/projected/b5fc7c52-af2d-46b0-991d-30bddad7f47f-kube-api-access-8wrx5\") pod \"calico-kube-controllers-746bc57ccf-bcnv6\" (UID: \"b5fc7c52-af2d-46b0-991d-30bddad7f47f\") " pod="calico-system/calico-kube-controllers-746bc57ccf-bcnv6" Apr 25 00:07:16.329869 kubelet[2527]: I0425 00:07:16.329863 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7f5e11-f035-4efb-b64e-0dc54e087c6e-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-wxzlr\" (UID: \"8a7f5e11-f035-4efb-b64e-0dc54e087c6e\") " pod="calico-system/goldmane-9f7667bb8-wxzlr" Apr 25 00:07:16.329920 kubelet[2527]: I0425 00:07:16.329913 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnsc\" (UniqueName: \"kubernetes.io/projected/790ad1c5-039c-4993-a342-0383d9c1d881-kube-api-access-nvnsc\") pod \"whisker-b49f7945-25tvh\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " pod="calico-system/whisker-b49f7945-25tvh" Apr 25 00:07:16.329963 kubelet[2527]: I0425 00:07:16.329957 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f40b2e4-ac3e-4645-a45c-301ecaa49eb6-config-volume\") pod \"coredns-7d764666f9-2wggw\" (UID: \"7f40b2e4-ac3e-4645-a45c-301ecaa49eb6\") " pod="kube-system/coredns-7d764666f9-2wggw" Apr 25 00:07:16.349895 systemd[1]: Created slice kubepods-besteffort-pod8a7f5e11_f035_4efb_b64e_0dc54e087c6e.slice - libcontainer container kubepods-besteffort-pod8a7f5e11_f035_4efb_b64e_0dc54e087c6e.slice. Apr 25 00:07:16.355188 systemd[1]: Created slice kubepods-burstable-pod6832a17a_615d_4511_81ef_78e8a9d1028f.slice - libcontainer container kubepods-burstable-pod6832a17a_615d_4511_81ef_78e8a9d1028f.slice. 
Apr 25 00:07:16.384576 systemd[1]: Started cri-containerd-8c426a186c31dba8e97df7069ef09fa3ad9f227780f987b925330b4d75f051f6.scope - libcontainer container 8c426a186c31dba8e97df7069ef09fa3ad9f227780f987b925330b4d75f051f6. Apr 25 00:07:16.561064 systemd[1]: Created slice kubepods-besteffort-pod01cf6d9e_8d92_40f8_898e_724f0af87eaf.slice - libcontainer container kubepods-besteffort-pod01cf6d9e_8d92_40f8_898e_724f0af87eaf.slice. Apr 25 00:07:16.600845 containerd[1482]: time="2026-04-25T00:07:16.600630211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfm6g,Uid:01cf6d9e-8d92-40f8-898e-724f0af87eaf,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:16.604798 containerd[1482]: time="2026-04-25T00:07:16.604650424Z" level=info msg="StartContainer for \"8c426a186c31dba8e97df7069ef09fa3ad9f227780f987b925330b4d75f051f6\" returns successfully" Apr 25 00:07:16.609238 kubelet[2527]: E0425 00:07:16.609148 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:16.611675 containerd[1482]: time="2026-04-25T00:07:16.611612278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2wggw,Uid:7f40b2e4-ac3e-4645-a45c-301ecaa49eb6,Namespace:kube-system,Attempt:0,}" Apr 25 00:07:16.653175 containerd[1482]: time="2026-04-25T00:07:16.652772584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b49f7945-25tvh,Uid:790ad1c5-039c-4993-a342-0383d9c1d881,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:16.668262 containerd[1482]: time="2026-04-25T00:07:16.668120794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-dkdwn,Uid:32405509-dcbc-4cec-81f7-5a2871f23270,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:16.669392 kubelet[2527]: E0425 00:07:16.668098 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:16.676715 containerd[1482]: time="2026-04-25T00:07:16.675773433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxzlr,Uid:8a7f5e11-f035-4efb-b64e-0dc54e087c6e,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:16.676715 containerd[1482]: time="2026-04-25T00:07:16.676152612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qhqdg,Uid:6832a17a-615d-4511-81ef-78e8a9d1028f,Namespace:kube-system,Attempt:0,}" Apr 25 00:07:16.900896 containerd[1482]: time="2026-04-25T00:07:16.899535242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-6pnn8,Uid:c6c7637f-a019-4d59-9ef0-740af17d1030,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:16.901189 containerd[1482]: time="2026-04-25T00:07:16.900695179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746bc57ccf-bcnv6,Uid:b5fc7c52-af2d-46b0-991d-30bddad7f47f,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:17.207750 containerd[1482]: time="2026-04-25T00:07:17.207341027Z" level=error msg="Failed to destroy network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.208123 containerd[1482]: time="2026-04-25T00:07:17.207782061Z" level=error msg="encountered an error cleaning up failed sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.208123 containerd[1482]: time="2026-04-25T00:07:17.207888413Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-dkdwn,Uid:32405509-dcbc-4cec-81f7-5a2871f23270,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.210078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8-shm.mount: Deactivated successfully. Apr 25 00:07:17.219267 kubelet[2527]: E0425 00:07:17.219196 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.219571 kubelet[2527]: E0425 00:07:17.219311 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" Apr 25 00:07:17.219571 kubelet[2527]: E0425 00:07:17.219337 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" Apr 25 00:07:17.219855 kubelet[2527]: E0425 00:07:17.219636 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bfd9bd8c4-dkdwn_calico-system(32405509-dcbc-4cec-81f7-5a2871f23270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bfd9bd8c4-dkdwn_calico-system(32405509-dcbc-4cec-81f7-5a2871f23270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" podUID="32405509-dcbc-4cec-81f7-5a2871f23270" Apr 25 00:07:17.238458 containerd[1482]: time="2026-04-25T00:07:17.238260891Z" level=error msg="Failed to destroy network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.240470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf-shm.mount: Deactivated successfully. 
Apr 25 00:07:17.242791 containerd[1482]: time="2026-04-25T00:07:17.242715006Z" level=error msg="encountered an error cleaning up failed sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.242861 containerd[1482]: time="2026-04-25T00:07:17.242808171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b49f7945-25tvh,Uid:790ad1c5-039c-4993-a342-0383d9c1d881,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.242998 kubelet[2527]: E0425 00:07:17.242952 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.243041 kubelet[2527]: E0425 00:07:17.243012 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b49f7945-25tvh" Apr 25 00:07:17.243041 kubelet[2527]: E0425 00:07:17.243027 2527 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b49f7945-25tvh" Apr 25 00:07:17.243103 kubelet[2527]: E0425 00:07:17.243070 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b49f7945-25tvh_calico-system(790ad1c5-039c-4993-a342-0383d9c1d881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b49f7945-25tvh_calico-system(790ad1c5-039c-4993-a342-0383d9c1d881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b49f7945-25tvh" podUID="790ad1c5-039c-4993-a342-0383d9c1d881" Apr 25 00:07:17.248144 kubelet[2527]: I0425 00:07:17.247254 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:17.283720 containerd[1482]: time="2026-04-25T00:07:17.283641650Z" level=info msg="StopPodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\"" Apr 25 00:07:17.284851 containerd[1482]: time="2026-04-25T00:07:17.284810493Z" level=info msg="Ensure that sandbox 1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8 in task-service has been cleanup successfully" Apr 25 00:07:17.285075 containerd[1482]: time="2026-04-25T00:07:17.285052257Z" level=error msg="Failed to destroy network for sandbox 
\"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.288847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a-shm.mount: Deactivated successfully. Apr 25 00:07:17.291902 containerd[1482]: time="2026-04-25T00:07:17.291851146Z" level=error msg="encountered an error cleaning up failed sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.292045 containerd[1482]: time="2026-04-25T00:07:17.292029110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2wggw,Uid:7f40b2e4-ac3e-4645-a45c-301ecaa49eb6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.294769 kubelet[2527]: E0425 00:07:17.294041 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.294769 kubelet[2527]: E0425 00:07:17.294147 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-2wggw" Apr 25 00:07:17.294769 kubelet[2527]: E0425 00:07:17.294163 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-2wggw" Apr 25 00:07:17.294912 kubelet[2527]: E0425 00:07:17.294246 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-2wggw_kube-system(7f40b2e4-ac3e-4645-a45c-301ecaa49eb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-2wggw_kube-system(7f40b2e4-ac3e-4645-a45c-301ecaa49eb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-2wggw" podUID="7f40b2e4-ac3e-4645-a45c-301ecaa49eb6" Apr 25 00:07:17.300420 containerd[1482]: time="2026-04-25T00:07:17.300346185Z" level=error msg="Failed to destroy network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 25 00:07:17.312800 containerd[1482]: time="2026-04-25T00:07:17.311299299Z" level=error msg="encountered an error cleaning up failed sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.311515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29-shm.mount: Deactivated successfully. Apr 25 00:07:17.327202 containerd[1482]: time="2026-04-25T00:07:17.327065152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfm6g,Uid:01cf6d9e-8d92-40f8-898e-724f0af87eaf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.327741 kubelet[2527]: E0425 00:07:17.327695 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.328261 kubelet[2527]: E0425 00:07:17.327890 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:07:17.328261 kubelet[2527]: E0425 00:07:17.327941 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfm6g" Apr 25 00:07:17.328437 kubelet[2527]: E0425 00:07:17.327989 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lfm6g_calico-system(01cf6d9e-8d92-40f8-898e-724f0af87eaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lfm6g_calico-system(01cf6d9e-8d92-40f8-898e-724f0af87eaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfm6g" podUID="01cf6d9e-8d92-40f8-898e-724f0af87eaf" Apr 25 00:07:17.328648 containerd[1482]: time="2026-04-25T00:07:17.326913022Z" level=error msg="Failed to destroy network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.328990 containerd[1482]: time="2026-04-25T00:07:17.324871752Z" level=error msg="Failed to destroy network for sandbox 
\"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.329699 containerd[1482]: time="2026-04-25T00:07:17.329643871Z" level=error msg="encountered an error cleaning up failed sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.329780 containerd[1482]: time="2026-04-25T00:07:17.329717286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746bc57ccf-bcnv6,Uid:b5fc7c52-af2d-46b0-991d-30bddad7f47f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.329952 kubelet[2527]: E0425 00:07:17.329924 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.330095 kubelet[2527]: E0425 00:07:17.329968 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746bc57ccf-bcnv6" Apr 25 00:07:17.330095 kubelet[2527]: E0425 00:07:17.329983 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746bc57ccf-bcnv6" Apr 25 00:07:17.330095 kubelet[2527]: E0425 00:07:17.330022 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746bc57ccf-bcnv6_calico-system(b5fc7c52-af2d-46b0-991d-30bddad7f47f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746bc57ccf-bcnv6_calico-system(b5fc7c52-af2d-46b0-991d-30bddad7f47f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746bc57ccf-bcnv6" podUID="b5fc7c52-af2d-46b0-991d-30bddad7f47f" Apr 25 00:07:17.330329 containerd[1482]: time="2026-04-25T00:07:17.330283074Z" level=error msg="encountered an error cleaning up failed sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 25 00:07:17.330359 containerd[1482]: time="2026-04-25T00:07:17.330334668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-6pnn8,Uid:c6c7637f-a019-4d59-9ef0-740af17d1030,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.330611 kubelet[2527]: E0425 00:07:17.330586 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.330639 kubelet[2527]: E0425 00:07:17.330626 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bfd9bd8c4-6pnn8" Apr 25 00:07:17.330689 kubelet[2527]: E0425 00:07:17.330641 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-7bfd9bd8c4-6pnn8" Apr 25 00:07:17.330689 kubelet[2527]: E0425 00:07:17.330673 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bfd9bd8c4-6pnn8_calico-system(c6c7637f-a019-4d59-9ef0-740af17d1030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bfd9bd8c4-6pnn8_calico-system(c6c7637f-a019-4d59-9ef0-740af17d1030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7bfd9bd8c4-6pnn8" podUID="c6c7637f-a019-4d59-9ef0-740af17d1030" Apr 25 00:07:17.330904 containerd[1482]: time="2026-04-25T00:07:17.330881570Z" level=error msg="Failed to destroy network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.331668 containerd[1482]: time="2026-04-25T00:07:17.331648486Z" level=error msg="encountered an error cleaning up failed sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.331703 containerd[1482]: time="2026-04-25T00:07:17.331684501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qhqdg,Uid:6832a17a-615d-4511-81ef-78e8a9d1028f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.331950 kubelet[2527]: E0425 00:07:17.331929 2527 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.332006 kubelet[2527]: E0425 00:07:17.331987 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-qhqdg" Apr 25 00:07:17.332031 kubelet[2527]: E0425 00:07:17.332003 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-qhqdg" Apr 25 00:07:17.332092 kubelet[2527]: E0425 00:07:17.332032 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-qhqdg_kube-system(6832a17a-615d-4511-81ef-78e8a9d1028f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7d764666f9-qhqdg_kube-system(6832a17a-615d-4511-81ef-78e8a9d1028f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-qhqdg" podUID="6832a17a-615d-4511-81ef-78e8a9d1028f" Apr 25 00:07:17.332937 containerd[1482]: time="2026-04-25T00:07:17.332914500Z" level=error msg="Failed to destroy network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.333269 containerd[1482]: time="2026-04-25T00:07:17.333251383Z" level=error msg="encountered an error cleaning up failed sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.333352 containerd[1482]: time="2026-04-25T00:07:17.333338019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxzlr,Uid:8a7f5e11-f035-4efb-b64e-0dc54e087c6e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.333595 kubelet[2527]: E0425 00:07:17.333578 2527 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.333701 kubelet[2527]: E0425 00:07:17.333689 2527 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wxzlr" Apr 25 00:07:17.333792 kubelet[2527]: E0425 00:07:17.333781 2527 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wxzlr" Apr 25 00:07:17.333917 kubelet[2527]: E0425 00:07:17.333899 2527 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-wxzlr_calico-system(8a7f5e11-f035-4efb-b64e-0dc54e087c6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-wxzlr_calico-system(8a7f5e11-f035-4efb-b64e-0dc54e087c6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-wxzlr" podUID="8a7f5e11-f035-4efb-b64e-0dc54e087c6e" Apr 25 00:07:17.372020 containerd[1482]: time="2026-04-25T00:07:17.371811576Z" level=error msg="StopPodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" failed" error="failed to destroy network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:07:17.375224 kubelet[2527]: E0425 00:07:17.374747 2527 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:17.375224 kubelet[2527]: E0425 00:07:17.374951 2527 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8"} Apr 25 00:07:17.375224 kubelet[2527]: E0425 00:07:17.375050 2527 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32405509-dcbc-4cec-81f7-5a2871f23270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:07:17.375224 kubelet[2527]: E0425 00:07:17.375090 2527 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32405509-dcbc-4cec-81f7-5a2871f23270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" podUID="32405509-dcbc-4cec-81f7-5a2871f23270" Apr 25 00:07:17.562426 kubelet[2527]: I0425 00:07:17.562007 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-thkp7" podStartSLOduration=2.570621146 podStartE2EDuration="23.561992779s" podCreationTimestamp="2026-04-25 00:06:54 +0000 UTC" firstStartedPulling="2026-04-25 00:06:55.243488532 +0000 UTC m=+19.920298765" lastFinishedPulling="2026-04-25 00:07:16.234860173 +0000 UTC m=+40.911670398" observedRunningTime="2026-04-25 00:07:17.270921811 +0000 UTC m=+41.947732036" watchObservedRunningTime="2026-04-25 00:07:17.561992779 +0000 UTC m=+42.238803017" Apr 25 00:07:18.101129 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2-shm.mount: Deactivated successfully. Apr 25 00:07:18.101279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049-shm.mount: Deactivated successfully. Apr 25 00:07:18.101328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a-shm.mount: Deactivated successfully. Apr 25 00:07:18.101378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87-shm.mount: Deactivated successfully. 
Apr 25 00:07:18.256143 kubelet[2527]: I0425 00:07:18.255940 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:18.258015 kubelet[2527]: I0425 00:07:18.256791 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:18.258060 containerd[1482]: time="2026-04-25T00:07:18.257312669Z" level=info msg="StopPodSandbox for \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\"" Apr 25 00:07:18.258060 containerd[1482]: time="2026-04-25T00:07:18.257652847Z" level=info msg="StopPodSandbox for \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\"" Apr 25 00:07:18.258060 containerd[1482]: time="2026-04-25T00:07:18.257880822Z" level=info msg="Ensure that sandbox 4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a in task-service has been cleanup successfully" Apr 25 00:07:18.260656 containerd[1482]: time="2026-04-25T00:07:18.258582518Z" level=info msg="Ensure that sandbox f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2 in task-service has been cleanup successfully" Apr 25 00:07:18.260656 containerd[1482]: time="2026-04-25T00:07:18.260045520Z" level=info msg="StopPodSandbox for \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\"" Apr 25 00:07:18.263239 kubelet[2527]: I0425 00:07:18.258886 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:18.263239 kubelet[2527]: I0425 00:07:18.261300 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:18.263304 containerd[1482]: time="2026-04-25T00:07:18.261035045Z" level=info msg="Ensure that sandbox 
5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf in task-service has been cleanup successfully" Apr 25 00:07:18.263304 containerd[1482]: time="2026-04-25T00:07:18.261902036Z" level=info msg="StopPodSandbox for \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\"" Apr 25 00:07:18.263304 containerd[1482]: time="2026-04-25T00:07:18.262132232Z" level=info msg="Ensure that sandbox 63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a in task-service has been cleanup successfully" Apr 25 00:07:18.288013 kubelet[2527]: I0425 00:07:18.287272 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:18.296696 containerd[1482]: time="2026-04-25T00:07:18.296496381Z" level=info msg="StopPodSandbox for \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\"" Apr 25 00:07:18.296696 containerd[1482]: time="2026-04-25T00:07:18.296847899Z" level=info msg="Ensure that sandbox 0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049 in task-service has been cleanup successfully" Apr 25 00:07:18.310129 kubelet[2527]: I0425 00:07:18.305441 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:18.325777 containerd[1482]: time="2026-04-25T00:07:18.324851840Z" level=info msg="StopPodSandbox for \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\"" Apr 25 00:07:18.325777 containerd[1482]: time="2026-04-25T00:07:18.325360497Z" level=info msg="Ensure that sandbox 0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29 in task-service has been cleanup successfully" Apr 25 00:07:18.338840 kubelet[2527]: I0425 00:07:18.338630 2527 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 
00:07:18.342271 containerd[1482]: time="2026-04-25T00:07:18.341890365Z" level=info msg="StopPodSandbox for \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\"" Apr 25 00:07:18.342271 containerd[1482]: time="2026-04-25T00:07:18.342089645Z" level=info msg="Ensure that sandbox 195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87 in task-service has been cleanup successfully" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.602 [INFO][3884] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.605 [INFO][3884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" iface="eth0" netns="/var/run/netns/cni-62f7ef6e-d03a-4448-e04c-5a6e5c9d720f" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.605 [INFO][3884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" iface="eth0" netns="/var/run/netns/cni-62f7ef6e-d03a-4448-e04c-5a6e5c9d720f" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.610 [INFO][3884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" iface="eth0" netns="/var/run/netns/cni-62f7ef6e-d03a-4448-e04c-5a6e5c9d720f" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.610 [INFO][3884] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.610 [INFO][3884] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.807 [INFO][3951] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.807 [INFO][3951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.807 [INFO][3951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.829 [WARNING][3951] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.829 [INFO][3951] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.842 [INFO][3951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:18.864668 containerd[1482]: 2026-04-25 00:07:18.859 [INFO][3884] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:18.864668 containerd[1482]: time="2026-04-25T00:07:18.864313952Z" level=info msg="TearDown network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\" successfully" Apr 25 00:07:18.864668 containerd[1482]: time="2026-04-25T00:07:18.864540449Z" level=info msg="StopPodSandbox for \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\" returns successfully" Apr 25 00:07:18.869293 systemd[1]: run-netns-cni\x2d62f7ef6e\x2dd03a\x2d4448\x2de04c\x2d5a6e5c9d720f.mount: Deactivated successfully. 
Apr 25 00:07:18.890286 containerd[1482]: time="2026-04-25T00:07:18.889988332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746bc57ccf-bcnv6,Uid:b5fc7c52-af2d-46b0-991d-30bddad7f47f,Namespace:calico-system,Attempt:1,}" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.658 [INFO][3879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.661 [INFO][3879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" iface="eth0" netns="/var/run/netns/cni-b2919b39-09d4-1e40-1085-fbbd68a3f4fc" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.662 [INFO][3879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" iface="eth0" netns="/var/run/netns/cni-b2919b39-09d4-1e40-1085-fbbd68a3f4fc" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.674 [INFO][3879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" iface="eth0" netns="/var/run/netns/cni-b2919b39-09d4-1e40-1085-fbbd68a3f4fc" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.674 [INFO][3879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.674 [INFO][3879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.852 [INFO][3976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.856 [INFO][3976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.856 [INFO][3976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.873 [WARNING][3976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.874 [INFO][3976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.878 [INFO][3976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:18.893855 containerd[1482]: 2026-04-25 00:07:18.885 [INFO][3879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:18.896764 containerd[1482]: time="2026-04-25T00:07:18.894107090Z" level=info msg="TearDown network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\" successfully" Apr 25 00:07:18.896764 containerd[1482]: time="2026-04-25T00:07:18.894125109Z" level=info msg="StopPodSandbox for \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\" returns successfully" Apr 25 00:07:18.902529 kubelet[2527]: E0425 00:07:18.902312 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:18.903788 systemd[1]: run-netns-cni\x2db2919b39\x2d09d4\x2d1e40\x2d1085\x2dfbbd68a3f4fc.mount: Deactivated successfully. 
Apr 25 00:07:18.907445 containerd[1482]: time="2026-04-25T00:07:18.907358026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2wggw,Uid:7f40b2e4-ac3e-4645-a45c-301ecaa49eb6,Namespace:kube-system,Attempt:1,}" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.613 [INFO][3880] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.613 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" iface="eth0" netns="/var/run/netns/cni-aa4769da-293f-fb10-07ca-d2beabe63438" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.614 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" iface="eth0" netns="/var/run/netns/cni-aa4769da-293f-fb10-07ca-d2beabe63438" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.614 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" iface="eth0" netns="/var/run/netns/cni-aa4769da-293f-fb10-07ca-d2beabe63438" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.614 [INFO][3880] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.614 [INFO][3880] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.852 [INFO][3953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.856 [INFO][3953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.878 [INFO][3953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.904 [WARNING][3953] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.904 [INFO][3953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.911 [INFO][3953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:18.921831 containerd[1482]: 2026-04-25 00:07:18.919 [INFO][3880] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:18.924604 containerd[1482]: time="2026-04-25T00:07:18.924556517Z" level=info msg="TearDown network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\" successfully" Apr 25 00:07:18.924665 containerd[1482]: time="2026-04-25T00:07:18.924654382Z" level=info msg="StopPodSandbox for \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\" returns successfully" Apr 25 00:07:18.924760 systemd[1]: run-netns-cni\x2daa4769da\x2d293f\x2dfb10\x2d07ca\x2dd2beabe63438.mount: Deactivated successfully. 
Apr 25 00:07:18.938430 containerd[1482]: time="2026-04-25T00:07:18.938347329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-6pnn8,Uid:c6c7637f-a019-4d59-9ef0-740af17d1030,Namespace:calico-system,Attempt:1,}" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.684 [INFO][3906] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.687 [INFO][3906] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" iface="eth0" netns="/var/run/netns/cni-4edb0e39-6401-c2d2-3b62-fbd643d515a3" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.687 [INFO][3906] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" iface="eth0" netns="/var/run/netns/cni-4edb0e39-6401-c2d2-3b62-fbd643d515a3" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.694 [INFO][3906] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" iface="eth0" netns="/var/run/netns/cni-4edb0e39-6401-c2d2-3b62-fbd643d515a3" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.715 [INFO][3906] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.716 [INFO][3906] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.856 [INFO][3986] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.856 [INFO][3986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.913 [INFO][3986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.935 [WARNING][3986] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.935 [INFO][3986] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.939 [INFO][3986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:18.964577 containerd[1482]: 2026-04-25 00:07:18.951 [INFO][3906] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:18.966667 containerd[1482]: time="2026-04-25T00:07:18.966185189Z" level=info msg="TearDown network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\" successfully" Apr 25 00:07:18.966667 containerd[1482]: time="2026-04-25T00:07:18.966603473Z" level=info msg="StopPodSandbox for \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\" returns successfully" Apr 25 00:07:18.982959 containerd[1482]: time="2026-04-25T00:07:18.982834516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfm6g,Uid:01cf6d9e-8d92-40f8-898e-724f0af87eaf,Namespace:calico-system,Attempt:1,}" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.659 [INFO][3848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.660 [INFO][3848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" iface="eth0" netns="/var/run/netns/cni-3bec52b9-cf08-1064-dba7-1947eb0fb774" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.660 [INFO][3848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" iface="eth0" netns="/var/run/netns/cni-3bec52b9-cf08-1064-dba7-1947eb0fb774" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.661 [INFO][3848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" iface="eth0" netns="/var/run/netns/cni-3bec52b9-cf08-1064-dba7-1947eb0fb774" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.661 [INFO][3848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.661 [INFO][3848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.863 [INFO][3974] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.863 [INFO][3974] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.939 [INFO][3974] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.970 [WARNING][3974] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.970 [INFO][3974] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.987 [INFO][3974] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:19.010015 containerd[1482]: 2026-04-25 00:07:18.999 [INFO][3848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:19.019758 containerd[1482]: time="2026-04-25T00:07:19.018550453Z" level=info msg="TearDown network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\" successfully" Apr 25 00:07:19.019758 containerd[1482]: time="2026-04-25T00:07:19.018578079Z" level=info msg="StopPodSandbox for \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\" returns successfully" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.763 [INFO][3920] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.764 [INFO][3920] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" iface="eth0" netns="/var/run/netns/cni-72f5528c-d088-a3d5-0954-4a80480972bd" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.771 [INFO][3920] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" iface="eth0" netns="/var/run/netns/cni-72f5528c-d088-a3d5-0954-4a80480972bd" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.772 [INFO][3920] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" iface="eth0" netns="/var/run/netns/cni-72f5528c-d088-a3d5-0954-4a80480972bd" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.772 [INFO][3920] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.772 [INFO][3920] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.880 [INFO][3994] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.883 [INFO][3994] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:18.988 [INFO][3994] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:19.024 [WARNING][3994] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:19.025 [INFO][3994] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:19.041 [INFO][3994] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:19.072827 containerd[1482]: 2026-04-25 00:07:19.062 [INFO][3920] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:19.073760 containerd[1482]: time="2026-04-25T00:07:19.073704217Z" level=info msg="TearDown network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\" successfully" Apr 25 00:07:19.073943 containerd[1482]: time="2026-04-25T00:07:19.073823364Z" level=info msg="StopPodSandbox for \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\" returns successfully" Apr 25 00:07:19.085463 kubelet[2527]: E0425 00:07:19.079612 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:19.092879 containerd[1482]: time="2026-04-25T00:07:19.092827854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qhqdg,Uid:6832a17a-615d-4511-81ef-78e8a9d1028f,Namespace:kube-system,Attempt:1,}" Apr 25 00:07:19.115044 systemd[1]: run-netns-cni\x2d72f5528c\x2dd088\x2da3d5\x2d0954\x2d4a80480972bd.mount: Deactivated successfully. 
Apr 25 00:07:19.115110 systemd[1]: run-netns-cni\x2d3bec52b9\x2dcf08\x2d1064\x2ddba7\x2d1947eb0fb774.mount: Deactivated successfully. Apr 25 00:07:19.115146 systemd[1]: run-netns-cni\x2d4edb0e39\x2d6401\x2dc2d2\x2d3b62\x2dfbd643d515a3.mount: Deactivated successfully. Apr 25 00:07:19.132493 kubelet[2527]: I0425 00:07:19.131897 2527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-ca-bundle\") pod \"790ad1c5-039c-4993-a342-0383d9c1d881\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " Apr 25 00:07:19.132493 kubelet[2527]: I0425 00:07:19.131959 2527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-nginx-config\" (UniqueName: \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-nginx-config\") pod \"790ad1c5-039c-4993-a342-0383d9c1d881\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " Apr 25 00:07:19.132493 kubelet[2527]: I0425 00:07:19.132027 2527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-backend-key-pair\") pod \"790ad1c5-039c-4993-a342-0383d9c1d881\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " Apr 25 00:07:19.132493 kubelet[2527]: I0425 00:07:19.132046 2527 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/790ad1c5-039c-4993-a342-0383d9c1d881-kube-api-access-nvnsc\" (UniqueName: \"kubernetes.io/projected/790ad1c5-039c-4993-a342-0383d9c1d881-kube-api-access-nvnsc\") pod \"790ad1c5-039c-4993-a342-0383d9c1d881\" (UID: \"790ad1c5-039c-4993-a342-0383d9c1d881\") " Apr 25 00:07:19.134057 kubelet[2527]: I0425 
00:07:19.133714 2527 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-ca-bundle" pod "790ad1c5-039c-4993-a342-0383d9c1d881" (UID: "790ad1c5-039c-4993-a342-0383d9c1d881"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.720 [INFO][3833] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.723 [INFO][3833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" iface="eth0" netns="/var/run/netns/cni-c9589d8a-5118-2539-e7c6-aa5a8b2d4fd9" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.769 [INFO][3833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" iface="eth0" netns="/var/run/netns/cni-c9589d8a-5118-2539-e7c6-aa5a8b2d4fd9" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.773 [INFO][3833] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" iface="eth0" netns="/var/run/netns/cni-c9589d8a-5118-2539-e7c6-aa5a8b2d4fd9" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.773 [INFO][3833] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.773 [INFO][3833] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.886 [INFO][4001] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:18.887 [INFO][4001] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:19.033 [INFO][4001] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:19.074 [WARNING][4001] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:19.074 [INFO][4001] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:19.076 [INFO][4001] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:19.134139 containerd[1482]: 2026-04-25 00:07:19.116 [INFO][3833] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:19.136803 containerd[1482]: time="2026-04-25T00:07:19.136237988Z" level=info msg="TearDown network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\" successfully" Apr 25 00:07:19.136803 containerd[1482]: time="2026-04-25T00:07:19.136285573Z" level=info msg="StopPodSandbox for \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\" returns successfully" Apr 25 00:07:19.137953 kubelet[2527]: I0425 00:07:19.137770 2527 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-nginx-config" pod "790ad1c5-039c-4993-a342-0383d9c1d881" (UID: "790ad1c5-039c-4993-a342-0383d9c1d881"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:07:19.138443 systemd[1]: run-netns-cni\x2dc9589d8a\x2d5118\x2d2539\x2de7c6\x2daa5a8b2d4fd9.mount: Deactivated successfully. 
Apr 25 00:07:19.146519 containerd[1482]: time="2026-04-25T00:07:19.146473117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxzlr,Uid:8a7f5e11-f035-4efb-b64e-0dc54e087c6e,Namespace:calico-system,Attempt:1,}" Apr 25 00:07:19.146890 systemd[1]: var-lib-kubelet-pods-790ad1c5\x2d039c\x2d4993\x2da342\x2d0383d9c1d881-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvnsc.mount: Deactivated successfully. Apr 25 00:07:19.152044 kubelet[2527]: I0425 00:07:19.147330 2527 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790ad1c5-039c-4993-a342-0383d9c1d881-kube-api-access-nvnsc" pod "790ad1c5-039c-4993-a342-0383d9c1d881" (UID: "790ad1c5-039c-4993-a342-0383d9c1d881"). InnerVolumeSpecName "kube-api-access-nvnsc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 00:07:19.147022 systemd[1]: var-lib-kubelet-pods-790ad1c5\x2d039c\x2d4993\x2da342\x2d0383d9c1d881-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 25 00:07:19.156108 kubelet[2527]: I0425 00:07:19.156041 2527 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-backend-key-pair" pod "790ad1c5-039c-4993-a342-0383d9c1d881" (UID: "790ad1c5-039c-4993-a342-0383d9c1d881"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 25 00:07:19.276232 kubelet[2527]: I0425 00:07:19.275938 2527 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 25 00:07:19.276232 kubelet[2527]: I0425 00:07:19.276100 2527 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvnsc\" (UniqueName: \"kubernetes.io/projected/790ad1c5-039c-4993-a342-0383d9c1d881-kube-api-access-nvnsc\") on node \"localhost\" DevicePath \"\"" Apr 25 00:07:19.276232 kubelet[2527]: I0425 00:07:19.276110 2527 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 25 00:07:19.276232 kubelet[2527]: I0425 00:07:19.276116 2527 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/790ad1c5-039c-4993-a342-0383d9c1d881-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 25 00:07:19.365339 systemd[1]: Removed slice kubepods-besteffort-pod790ad1c5_039c_4993_a342_0383d9c1d881.slice - libcontainer container kubepods-besteffort-pod790ad1c5_039c_4993_a342_0383d9c1d881.slice. Apr 25 00:07:19.760090 systemd[1]: Created slice kubepods-besteffort-podc228283b_be94_46da_acfa_f2f433bacb39.slice - libcontainer container kubepods-besteffort-podc228283b_be94_46da_acfa_f2f433bacb39.slice. 
Apr 25 00:07:19.917964 kubelet[2527]: I0425 00:07:19.917792 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c228283b-be94-46da-acfa-f2f433bacb39-whisker-backend-key-pair\") pod \"whisker-7456c955c4-ctk9r\" (UID: \"c228283b-be94-46da-acfa-f2f433bacb39\") " pod="calico-system/whisker-7456c955c4-ctk9r" Apr 25 00:07:19.917964 kubelet[2527]: I0425 00:07:19.917876 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c228283b-be94-46da-acfa-f2f433bacb39-whisker-ca-bundle\") pod \"whisker-7456c955c4-ctk9r\" (UID: \"c228283b-be94-46da-acfa-f2f433bacb39\") " pod="calico-system/whisker-7456c955c4-ctk9r" Apr 25 00:07:19.917964 kubelet[2527]: I0425 00:07:19.917905 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c228283b-be94-46da-acfa-f2f433bacb39-nginx-config\") pod \"whisker-7456c955c4-ctk9r\" (UID: \"c228283b-be94-46da-acfa-f2f433bacb39\") " pod="calico-system/whisker-7456c955c4-ctk9r" Apr 25 00:07:19.917964 kubelet[2527]: I0425 00:07:19.917924 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ms5c\" (UniqueName: \"kubernetes.io/projected/c228283b-be94-46da-acfa-f2f433bacb39-kube-api-access-8ms5c\") pod \"whisker-7456c955c4-ctk9r\" (UID: \"c228283b-be94-46da-acfa-f2f433bacb39\") " pod="calico-system/whisker-7456c955c4-ctk9r" Apr 25 00:07:20.105438 containerd[1482]: time="2026-04-25T00:07:20.102298422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7456c955c4-ctk9r,Uid:c228283b-be94-46da-acfa-f2f433bacb39,Namespace:calico-system,Attempt:0,}" Apr 25 00:07:20.142826 systemd-networkd[1400]: cali323ccba9519: Link UP Apr 25 00:07:20.164791 systemd-networkd[1400]: 
cali323ccba9519: Gained carrier Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.040 [ERROR][4033] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.052 [INFO][4033] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--2wggw-eth0 coredns-7d764666f9- kube-system 7f40b2e4-ac3e-4645-a45c-301ecaa49eb6 924 0 2026-04-25 00:06:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-2wggw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali323ccba9519 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.052 [INFO][4033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.341 [INFO][4158] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" HandleID="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:20.327537 containerd[1482]: 
2026-04-25 00:07:19.376 [INFO][4158] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" HandleID="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e80a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-2wggw", "timestamp":"2026-04-25 00:07:19.341538652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186dc0)} Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.376 [INFO][4158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.376 [INFO][4158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.377 [INFO][4158] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.400 [INFO][4158] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.544 [INFO][4158] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.661 [INFO][4158] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.671 [INFO][4158] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.748 [INFO][4158] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.749 [INFO][4158] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.807 [INFO][4158] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:19.893 [INFO][4158] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:20.015 [INFO][4158] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:20.036 [INFO][4158] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" host="localhost" Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:20.037 [INFO][4158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:20.327537 containerd[1482]: 2026-04-25 00:07:20.037 [INFO][4158] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" HandleID="k8s-pod-network.9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:20.328111 containerd[1482]: 2026-04-25 00:07:20.053 [INFO][4033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--2wggw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f40b2e4-ac3e-4645-a45c-301ecaa49eb6", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-2wggw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali323ccba9519", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.328111 containerd[1482]: 2026-04-25 00:07:20.053 [INFO][4033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:20.328111 containerd[1482]: 2026-04-25 00:07:20.053 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali323ccba9519 ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 
00:07:20.328111 containerd[1482]: 2026-04-25 00:07:20.250 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:20.328111 containerd[1482]: 2026-04-25 00:07:20.264 [INFO][4033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--2wggw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f40b2e4-ac3e-4645-a45c-301ecaa49eb6", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce", Pod:"coredns-7d764666f9-2wggw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali323ccba9519", MAC:"f2:6e:b7:92:d2:a5", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.328111 containerd[1482]: 2026-04-25 00:07:20.322 [INFO][4033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce" Namespace="kube-system" Pod="coredns-7d764666f9-2wggw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:20.415735 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:35916.service - OpenSSH per-connection server daemon (10.0.0.1:35916). Apr 25 00:07:20.488038 containerd[1482]: time="2026-04-25T00:07:20.415941401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:20.488038 containerd[1482]: time="2026-04-25T00:07:20.416105560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:20.488038 containerd[1482]: time="2026-04-25T00:07:20.416129423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:20.488038 containerd[1482]: time="2026-04-25T00:07:20.416292005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:20.570535 systemd[1]: Started cri-containerd-9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce.scope - libcontainer container 9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce. Apr 25 00:07:20.574979 systemd-networkd[1400]: calie9cf27fa8e1: Link UP Apr 25 00:07:20.575871 systemd-networkd[1400]: calie9cf27fa8e1: Gained carrier Apr 25 00:07:20.590444 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 35916 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:20.595363 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:20.601773 systemd-logind[1458]: New session 8 of user core. Apr 25 00:07:20.604529 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:19.072 [ERROR][4018] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:19.114 [INFO][4018] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0 calico-kube-controllers-746bc57ccf- calico-system b5fc7c52-af2d-46b0-991d-30bddad7f47f 921 0 2026-04-25 00:06:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:746bc57ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-746bc57ccf-bcnv6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie9cf27fa8e1 [] [] }} ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:19.114 [INFO][4018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:19.361 [INFO][4177] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" HandleID="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" 
Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:19.383 [INFO][4177] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" HandleID="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-746bc57ccf-bcnv6", "timestamp":"2026-04-25 00:07:19.361865291 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003aa580)} Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:19.386 [INFO][4177] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.038 [INFO][4177] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.038 [INFO][4177] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.076 [INFO][4177] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.180 [INFO][4177] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.279 [INFO][4177] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.322 [INFO][4177] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.335 [INFO][4177] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.335 [INFO][4177] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.340 [INFO][4177] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0 Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.384 [INFO][4177] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.517 [INFO][4177] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.517 [INFO][4177] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" host="localhost" Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.530 [INFO][4177] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:20.605464 containerd[1482]: 2026-04-25 00:07:20.531 [INFO][4177] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" HandleID="k8s-pod-network.03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.605939 containerd[1482]: 2026-04-25 00:07:20.562 [INFO][4018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0", GenerateName:"calico-kube-controllers-746bc57ccf-", Namespace:"calico-system", SelfLink:"", UID:"b5fc7c52-af2d-46b0-991d-30bddad7f47f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746bc57ccf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-746bc57ccf-bcnv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9cf27fa8e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.605939 containerd[1482]: 2026-04-25 00:07:20.563 [INFO][4018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.605939 containerd[1482]: 2026-04-25 00:07:20.564 [INFO][4018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9cf27fa8e1 ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.605939 containerd[1482]: 2026-04-25 00:07:20.577 [INFO][4018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.605939 containerd[1482]: 
2026-04-25 00:07:20.578 [INFO][4018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0", GenerateName:"calico-kube-controllers-746bc57ccf-", Namespace:"calico-system", SelfLink:"", UID:"b5fc7c52-af2d-46b0-991d-30bddad7f47f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746bc57ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0", Pod:"calico-kube-controllers-746bc57ccf-bcnv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9cf27fa8e1", MAC:"2e:9b:4c:86:75:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.605939 containerd[1482]: 
2026-04-25 00:07:20.602 [INFO][4018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0" Namespace="calico-system" Pod="calico-kube-controllers-746bc57ccf-bcnv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:20.622543 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:20.675707 containerd[1482]: time="2026-04-25T00:07:20.672302807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:20.683250 systemd-networkd[1400]: cali32a85d38c0d: Link UP Apr 25 00:07:20.684006 containerd[1482]: time="2026-04-25T00:07:20.683589383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:20.684108 systemd-networkd[1400]: cali32a85d38c0d: Gained carrier Apr 25 00:07:20.691526 containerd[1482]: time="2026-04-25T00:07:20.691480429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2wggw,Uid:7f40b2e4-ac3e-4645-a45c-301ecaa49eb6,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce\"" Apr 25 00:07:20.694284 containerd[1482]: time="2026-04-25T00:07:20.691817726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:20.694284 containerd[1482]: time="2026-04-25T00:07:20.693610258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:20.695472 kubelet[2527]: E0425 00:07:20.694941 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:20.709245 containerd[1482]: time="2026-04-25T00:07:20.709163896Z" level=info msg="CreateContainer within sandbox \"9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:07:20.726210 systemd[1]: Started cri-containerd-03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0.scope - libcontainer container 03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0. Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:19.160 [ERROR][4129] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:19.186 [INFO][4129] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lfm6g-eth0 csi-node-driver- calico-system 01cf6d9e-8d92-40f8-898e-724f0af87eaf 925 0 2026-04-25 00:06:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lfm6g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali32a85d38c0d [] [] }} ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-" Apr 25 
00:07:20.735303 containerd[1482]: 2026-04-25 00:07:19.187 [INFO][4129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:19.399 [INFO][4202] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" HandleID="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:19.497 [INFO][4202] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" HandleID="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037df50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lfm6g", "timestamp":"2026-04-25 00:07:19.399698998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000c31e0)} Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:19.498 [INFO][4202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.519 [INFO][4202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.519 [INFO][4202] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.559 [INFO][4202] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.572 [INFO][4202] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.607 [INFO][4202] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.611 [INFO][4202] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.627 [INFO][4202] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.627 [INFO][4202] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.630 [INFO][4202] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.636 [INFO][4202] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.642 [INFO][4202] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.643 [INFO][4202] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" host="localhost" Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.643 [INFO][4202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:20.735303 containerd[1482]: 2026-04-25 00:07:20.643 [INFO][4202] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" HandleID="k8s-pod-network.3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.742223 containerd[1482]: 2026-04-25 00:07:20.656 [INFO][4129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lfm6g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"01cf6d9e-8d92-40f8-898e-724f0af87eaf", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lfm6g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32a85d38c0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.742223 containerd[1482]: 2026-04-25 00:07:20.656 [INFO][4129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.742223 containerd[1482]: 2026-04-25 00:07:20.656 [INFO][4129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32a85d38c0d ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.742223 containerd[1482]: 2026-04-25 00:07:20.683 [INFO][4129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.742223 containerd[1482]: 2026-04-25 00:07:20.690 [INFO][4129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" 
Namespace="calico-system" Pod="csi-node-driver-lfm6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lfm6g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"01cf6d9e-8d92-40f8-898e-724f0af87eaf", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac", Pod:"csi-node-driver-lfm6g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32a85d38c0d", MAC:"96:ad:b3:1c:68:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.742223 containerd[1482]: 2026-04-25 00:07:20.721 [INFO][4129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac" Namespace="calico-system" Pod="csi-node-driver-lfm6g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:20.754508 containerd[1482]: time="2026-04-25T00:07:20.754351943Z" level=info msg="CreateContainer within sandbox \"9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1dd01dfb2e915de41e036631a6e7c4e252d5c30a0de92ba64f23b90dd105ae08\"" Apr 25 00:07:20.758642 containerd[1482]: time="2026-04-25T00:07:20.758069543Z" level=info msg="StartContainer for \"1dd01dfb2e915de41e036631a6e7c4e252d5c30a0de92ba64f23b90dd105ae08\"" Apr 25 00:07:20.775766 containerd[1482]: time="2026-04-25T00:07:20.775463455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:20.775766 containerd[1482]: time="2026-04-25T00:07:20.775510572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:20.775766 containerd[1482]: time="2026-04-25T00:07:20.775522777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:20.775766 containerd[1482]: time="2026-04-25T00:07:20.775594339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:20.793538 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:20.805698 systemd-networkd[1400]: calic20ea658450: Link UP Apr 25 00:07:20.806487 systemd-networkd[1400]: calic20ea658450: Gained carrier Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:19.218 [ERROR][4173] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:19.297 [INFO][4173] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--qhqdg-eth0 coredns-7d764666f9- kube-system 6832a17a-615d-4511-81ef-78e8a9d1028f 927 0 2026-04-25 00:06:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-qhqdg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic20ea658450 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:19.297 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:19.829 [INFO][4211] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" HandleID="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:19.926 [INFO][4211] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" HandleID="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-qhqdg", "timestamp":"2026-04-25 00:07:19.828997553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00068c420)} Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:19.927 [INFO][4211] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.643 [INFO][4211] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.644 [INFO][4211] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.659 [INFO][4211] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.744 [INFO][4211] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.762 [INFO][4211] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.764 [INFO][4211] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.767 [INFO][4211] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.767 [INFO][4211] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.770 [INFO][4211] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48 Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.776 [INFO][4211] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.791 [INFO][4211] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.792 [INFO][4211] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" host="localhost" Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.792 [INFO][4211] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:20.823470 containerd[1482]: 2026-04-25 00:07:20.792 [INFO][4211] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" HandleID="k8s-pod-network.7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:20.824588 containerd[1482]: 2026-04-25 00:07:20.800 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--qhqdg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6832a17a-615d-4511-81ef-78e8a9d1028f", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-qhqdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic20ea658450", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.824588 containerd[1482]: 2026-04-25 00:07:20.801 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:20.824588 containerd[1482]: 2026-04-25 00:07:20.801 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic20ea658450 ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 
00:07:20.824588 containerd[1482]: 2026-04-25 00:07:20.807 [INFO][4173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:20.824588 containerd[1482]: 2026-04-25 00:07:20.807 [INFO][4173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--qhqdg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6832a17a-615d-4511-81ef-78e8a9d1028f", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48", Pod:"coredns-7d764666f9-qhqdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic20ea658450", MAC:"26:21:dd:57:2a:f3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:20.824588 containerd[1482]: 2026-04-25 00:07:20.816 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48" Namespace="kube-system" Pod="coredns-7d764666f9-qhqdg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:20.824277 systemd[1]: Started cri-containerd-3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac.scope - libcontainer container 3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac. Apr 25 00:07:20.834567 kernel: calico-node[4228]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 25 00:07:20.850562 systemd[1]: Started cri-containerd-1dd01dfb2e915de41e036631a6e7c4e252d5c30a0de92ba64f23b90dd105ae08.scope - libcontainer container 1dd01dfb2e915de41e036631a6e7c4e252d5c30a0de92ba64f23b90dd105ae08. 
Apr 25 00:07:20.865349 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:20.967936 containerd[1482]: time="2026-04-25T00:07:20.966272704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746bc57ccf-bcnv6,Uid:b5fc7c52-af2d-46b0-991d-30bddad7f47f,Namespace:calico-system,Attempt:1,} returns sandbox id \"03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0\"" Apr 25 00:07:20.973237 sshd[4312]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:21.051962 containerd[1482]: time="2026-04-25T00:07:21.051509402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 25 00:07:21.053135 containerd[1482]: time="2026-04-25T00:07:21.052705870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:21.053135 containerd[1482]: time="2026-04-25T00:07:21.052938705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:21.053135 containerd[1482]: time="2026-04-25T00:07:21.052951234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:21.055068 containerd[1482]: time="2026-04-25T00:07:21.054990002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:21.069680 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:35916.service: Deactivated successfully. Apr 25 00:07:21.076482 systemd[1]: session-8.scope: Deactivated successfully. 
Apr 25 00:07:21.082113 containerd[1482]: time="2026-04-25T00:07:21.080910273Z" level=info msg="StartContainer for \"1dd01dfb2e915de41e036631a6e7c4e252d5c30a0de92ba64f23b90dd105ae08\" returns successfully" Apr 25 00:07:21.090234 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Apr 25 00:07:21.094640 systemd-logind[1458]: Removed session 8. Apr 25 00:07:21.131382 containerd[1482]: time="2026-04-25T00:07:21.130673412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfm6g,Uid:01cf6d9e-8d92-40f8-898e-724f0af87eaf,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac\"" Apr 25 00:07:21.195129 systemd[1]: Started cri-containerd-7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48.scope - libcontainer container 7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48. Apr 25 00:07:21.252537 systemd-networkd[1400]: cali323ccba9519: Gained IPv6LL Apr 25 00:07:21.278162 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:21.334150 systemd-networkd[1400]: calif09a82f870c: Link UP Apr 25 00:07:21.334979 systemd-networkd[1400]: calif09a82f870c: Gained carrier Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:19.324 [ERROR][4138] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:19.388 [INFO][4138] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0 calico-apiserver-7bfd9bd8c4- calico-system c6c7637f-a019-4d59-9ef0-740af17d1030 922 0 2026-04-25 00:06:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:7bfd9bd8c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bfd9bd8c4-6pnn8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif09a82f870c [] [] }} ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:19.389 [INFO][4138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.022 [INFO][4245] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" HandleID="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.076 [INFO][4245] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" HandleID="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d83c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7bfd9bd8c4-6pnn8", "timestamp":"2026-04-25 00:07:20.022721924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000338420)} Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.076 [INFO][4245] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.792 [INFO][4245] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.792 [INFO][4245] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.800 [INFO][4245] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.852 [INFO][4245] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:20.968 [INFO][4245] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.031 [INFO][4245] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.037 [INFO][4245] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.037 [INFO][4245] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.039 [INFO][4245] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec Apr 25 00:07:21.404263 containerd[1482]: 
2026-04-25 00:07:21.189 [INFO][4245] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.272 [INFO][4245] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.272 [INFO][4245] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" host="localhost" Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.272 [INFO][4245] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:21.404263 containerd[1482]: 2026-04-25 00:07:21.272 [INFO][4245] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" HandleID="k8s-pod-network.deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.407160 containerd[1482]: 2026-04-25 00:07:21.285 [INFO][4138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"c6c7637f-a019-4d59-9ef0-740af17d1030", ResourceVersion:"922", 
Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bfd9bd8c4-6pnn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif09a82f870c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:21.407160 containerd[1482]: 2026-04-25 00:07:21.312 [INFO][4138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.407160 containerd[1482]: 2026-04-25 00:07:21.324 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif09a82f870c ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.407160 containerd[1482]: 2026-04-25 00:07:21.333 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.407160 containerd[1482]: 2026-04-25 00:07:21.334 [INFO][4138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"c6c7637f-a019-4d59-9ef0-740af17d1030", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec", Pod:"calico-apiserver-7bfd9bd8c4-6pnn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, 
InterfaceName:"calif09a82f870c", MAC:"ba:b0:76:36:9f:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:21.407160 containerd[1482]: 2026-04-25 00:07:21.395 [INFO][4138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-6pnn8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:21.419641 kubelet[2527]: E0425 00:07:21.419543 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:21.440653 containerd[1482]: time="2026-04-25T00:07:21.440446923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qhqdg,Uid:6832a17a-615d-4511-81ef-78e8a9d1028f,Namespace:kube-system,Attempt:1,} returns sandbox id \"7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48\"" Apr 25 00:07:21.442881 kubelet[2527]: E0425 00:07:21.441641 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:21.456687 containerd[1482]: time="2026-04-25T00:07:21.453909328Z" level=info msg="CreateContainer within sandbox \"7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:07:21.489580 kubelet[2527]: I0425 00:07:21.489099 2527 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="790ad1c5-039c-4993-a342-0383d9c1d881" path="/var/lib/kubelet/pods/790ad1c5-039c-4993-a342-0383d9c1d881/volumes" Apr 25 00:07:21.499222 kubelet[2527]: I0425 00:07:21.498723 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/coredns-7d764666f9-2wggw" podStartSLOduration=39.49865816 podStartE2EDuration="39.49865816s" podCreationTimestamp="2026-04-25 00:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:07:21.468827551 +0000 UTC m=+46.145637781" watchObservedRunningTime="2026-04-25 00:07:21.49865816 +0000 UTC m=+46.175468385" Apr 25 00:07:21.561686 containerd[1482]: time="2026-04-25T00:07:21.561556580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:21.561987 containerd[1482]: time="2026-04-25T00:07:21.561701137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:21.561987 containerd[1482]: time="2026-04-25T00:07:21.561714325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:21.561987 containerd[1482]: time="2026-04-25T00:07:21.561824198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:21.585157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730713068.mount: Deactivated successfully. Apr 25 00:07:21.612224 containerd[1482]: time="2026-04-25T00:07:21.611910001Z" level=info msg="CreateContainer within sandbox \"7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b85e740a02b1a994a6d7030745161f5051ca6a87207c4c0131584b8c8d6e0487\"" Apr 25 00:07:21.623103 systemd[1]: Started cri-containerd-deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec.scope - libcontainer container deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec. 
Apr 25 00:07:21.624165 containerd[1482]: time="2026-04-25T00:07:21.624050012Z" level=info msg="StartContainer for \"b85e740a02b1a994a6d7030745161f5051ca6a87207c4c0131584b8c8d6e0487\"" Apr 25 00:07:21.668474 systemd-networkd[1400]: calie790da6c259: Link UP Apr 25 00:07:21.670186 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:21.671830 systemd-networkd[1400]: calie790da6c259: Gained carrier Apr 25 00:07:21.700664 systemd-networkd[1400]: cali32a85d38c0d: Gained IPv6LL Apr 25 00:07:21.724576 systemd[1]: Started cri-containerd-b85e740a02b1a994a6d7030745161f5051ca6a87207c4c0131584b8c8d6e0487.scope - libcontainer container b85e740a02b1a994a6d7030745161f5051ca6a87207c4c0131584b8c8d6e0487. Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:19.586 [ERROR][4219] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:19.922 [INFO][4219] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0 goldmane-9f7667bb8- calico-system 8a7f5e11-f035-4efb-b64e-0dc54e087c6e 926 0 2026-04-25 00:06:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-wxzlr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie790da6c259 [] [] }} ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:19.923 [INFO][4219] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:20.282 [INFO][4261] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" HandleID="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:20.343 [INFO][4261] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" HandleID="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006805a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-wxzlr", "timestamp":"2026-04-25 00:07:20.282256165 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00020bb80)} Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:20.360 [INFO][4261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.273 [INFO][4261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.273 [INFO][4261] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.328 [INFO][4261] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.387 [INFO][4261] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.453 [INFO][4261] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.473 [INFO][4261] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.540 [INFO][4261] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.540 [INFO][4261] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.582 [INFO][4261] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.618 [INFO][4261] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.649 [INFO][4261] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.649 [INFO][4261] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" host="localhost" Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.649 [INFO][4261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:21.737391 containerd[1482]: 2026-04-25 00:07:21.649 [INFO][4261] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" HandleID="k8s-pod-network.226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.738884 containerd[1482]: 2026-04-25 00:07:21.657 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"8a7f5e11-f035-4efb-b64e-0dc54e087c6e", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-wxzlr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie790da6c259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:21.738884 containerd[1482]: 2026-04-25 00:07:21.658 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.738884 containerd[1482]: 2026-04-25 00:07:21.660 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie790da6c259 ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.738884 containerd[1482]: 2026-04-25 00:07:21.679 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.738884 containerd[1482]: 2026-04-25 00:07:21.681 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"8a7f5e11-f035-4efb-b64e-0dc54e087c6e", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe", Pod:"goldmane-9f7667bb8-wxzlr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie790da6c259", MAC:"56:d6:9a:09:94:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:21.738884 containerd[1482]: 2026-04-25 00:07:21.727 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxzlr" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:21.764335 systemd-networkd[1400]: calie9cf27fa8e1: Gained IPv6LL Apr 25 00:07:21.799570 containerd[1482]: 
time="2026-04-25T00:07:21.798114278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:21.799570 containerd[1482]: time="2026-04-25T00:07:21.798151652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:21.799570 containerd[1482]: time="2026-04-25T00:07:21.798164342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:21.799570 containerd[1482]: time="2026-04-25T00:07:21.799478006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:21.820922 containerd[1482]: time="2026-04-25T00:07:21.820823290Z" level=info msg="StartContainer for \"b85e740a02b1a994a6d7030745161f5051ca6a87207c4c0131584b8c8d6e0487\" returns successfully" Apr 25 00:07:21.996729 containerd[1482]: time="2026-04-25T00:07:21.996093486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-6pnn8,Uid:c6c7637f-a019-4d59-9ef0-740af17d1030,Namespace:calico-system,Attempt:1,} returns sandbox id \"deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec\"" Apr 25 00:07:22.051014 systemd[1]: Started cri-containerd-226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe.scope - libcontainer container 226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe. 
Apr 25 00:07:22.114087 systemd-networkd[1400]: cali5bdb6a5fa83: Link UP Apr 25 00:07:22.116181 systemd-networkd[1400]: cali5bdb6a5fa83: Gained carrier Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:20.262 [ERROR][4273] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:20.334 [INFO][4273] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7456c955c4--ctk9r-eth0 whisker-7456c955c4- calico-system c228283b-be94-46da-acfa-f2f433bacb39 953 0 2026-04-25 00:07:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7456c955c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7456c955c4-ctk9r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5bdb6a5fa83 [] [] }} ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:20.334 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:20.637 [INFO][4311] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" HandleID="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" 
Workload="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:20.653 [INFO][4311] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" HandleID="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Workload="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bdb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7456c955c4-ctk9r", "timestamp":"2026-04-25 00:07:20.637569037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039f4a0)} Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:20.654 [INFO][4311] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.649 [INFO][4311] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.650 [INFO][4311] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.662 [INFO][4311] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.679 [INFO][4311] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.732 [INFO][4311] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.751 [INFO][4311] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.766 [INFO][4311] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.769 [INFO][4311] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.786 [INFO][4311] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7 Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.867 [INFO][4311] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.998 [INFO][4311] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.999 [INFO][4311] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" host="localhost" Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.999 [INFO][4311] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:22.147550 containerd[1482]: 2026-04-25 00:07:21.999 [INFO][4311] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" HandleID="k8s-pod-network.9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Workload="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.149026 containerd[1482]: 2026-04-25 00:07:22.061 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7456c955c4--ctk9r-eth0", GenerateName:"whisker-7456c955c4-", Namespace:"calico-system", SelfLink:"", UID:"c228283b-be94-46da-acfa-f2f433bacb39", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 7, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7456c955c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7456c955c4-ctk9r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5bdb6a5fa83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:22.149026 containerd[1482]: 2026-04-25 00:07:22.077 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.149026 containerd[1482]: 2026-04-25 00:07:22.083 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bdb6a5fa83 ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.149026 containerd[1482]: 2026-04-25 00:07:22.127 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.149026 containerd[1482]: 2026-04-25 00:07:22.128 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" 
WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7456c955c4--ctk9r-eth0", GenerateName:"whisker-7456c955c4-", Namespace:"calico-system", SelfLink:"", UID:"c228283b-be94-46da-acfa-f2f433bacb39", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 7, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7456c955c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7", Pod:"whisker-7456c955c4-ctk9r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5bdb6a5fa83", MAC:"32:60:a7:1e:8d:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:22.149026 containerd[1482]: 2026-04-25 00:07:22.144 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7" Namespace="calico-system" Pod="whisker-7456c955c4-ctk9r" WorkloadEndpoint="localhost-k8s-whisker--7456c955c4--ctk9r-eth0" Apr 25 00:07:22.149962 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such 
device or address Apr 25 00:07:22.167347 containerd[1482]: time="2026-04-25T00:07:22.167237254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:22.167347 containerd[1482]: time="2026-04-25T00:07:22.167313527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:22.167347 containerd[1482]: time="2026-04-25T00:07:22.167323795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:22.167574 containerd[1482]: time="2026-04-25T00:07:22.167420831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:22.225054 systemd[1]: Started cri-containerd-9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7.scope - libcontainer container 9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7. 
Apr 25 00:07:22.228647 containerd[1482]: time="2026-04-25T00:07:22.226023957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxzlr,Uid:8a7f5e11-f035-4efb-b64e-0dc54e087c6e,Namespace:calico-system,Attempt:1,} returns sandbox id \"226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe\"" Apr 25 00:07:22.237964 systemd-networkd[1400]: vxlan.calico: Link UP Apr 25 00:07:22.238200 systemd-networkd[1400]: vxlan.calico: Gained carrier Apr 25 00:07:22.266319 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:22.301243 containerd[1482]: time="2026-04-25T00:07:22.301188588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7456c955c4-ctk9r,Uid:c228283b-be94-46da-acfa-f2f433bacb39,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7\"" Apr 25 00:07:22.466203 kubelet[2527]: E0425 00:07:22.465855 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:22.469865 kubelet[2527]: E0425 00:07:22.469079 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:22.520183 kubelet[2527]: I0425 00:07:22.518550 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-qhqdg" podStartSLOduration=40.515359023 podStartE2EDuration="40.515359023s" podCreationTimestamp="2026-04-25 00:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:07:22.503245937 +0000 UTC m=+47.180056198" watchObservedRunningTime="2026-04-25 00:07:22.515359023 +0000 UTC m=+47.192169255" Apr 25 
00:07:22.533439 systemd-networkd[1400]: calif09a82f870c: Gained IPv6LL Apr 25 00:07:22.536234 systemd-networkd[1400]: calic20ea658450: Gained IPv6LL Apr 25 00:07:23.481297 kubelet[2527]: E0425 00:07:23.481051 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:23.484440 kubelet[2527]: E0425 00:07:23.483308 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:23.493753 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Apr 25 00:07:23.620630 systemd-networkd[1400]: calie790da6c259: Gained IPv6LL Apr 25 00:07:23.812076 systemd-networkd[1400]: cali5bdb6a5fa83: Gained IPv6LL Apr 25 00:07:24.490015 kubelet[2527]: E0425 00:07:24.489668 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:24.512137 kubelet[2527]: E0425 00:07:24.510771 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:24.564110 containerd[1482]: time="2026-04-25T00:07:24.563148140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:24.567163 containerd[1482]: time="2026-04-25T00:07:24.566797580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 25 00:07:24.568301 containerd[1482]: time="2026-04-25T00:07:24.568269062Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 25 00:07:24.571893 containerd[1482]: time="2026-04-25T00:07:24.571848248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:24.572427 containerd[1482]: time="2026-04-25T00:07:24.572366231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.520822172s" Apr 25 00:07:24.572427 containerd[1482]: time="2026-04-25T00:07:24.572417598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 25 00:07:24.574047 containerd[1482]: time="2026-04-25T00:07:24.573959875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 25 00:07:24.593696 containerd[1482]: time="2026-04-25T00:07:24.593658170Z" level=info msg="CreateContainer within sandbox \"03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 25 00:07:24.627424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737392498.mount: Deactivated successfully. 
Apr 25 00:07:24.639840 containerd[1482]: time="2026-04-25T00:07:24.639620631Z" level=info msg="CreateContainer within sandbox \"03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ff252c65dbec501509c94304bd75d243a1bae700b50f2ddc9564d8d6f951a72e\"" Apr 25 00:07:24.646608 containerd[1482]: time="2026-04-25T00:07:24.646281551Z" level=info msg="StartContainer for \"ff252c65dbec501509c94304bd75d243a1bae700b50f2ddc9564d8d6f951a72e\"" Apr 25 00:07:24.722069 systemd[1]: Started cri-containerd-ff252c65dbec501509c94304bd75d243a1bae700b50f2ddc9564d8d6f951a72e.scope - libcontainer container ff252c65dbec501509c94304bd75d243a1bae700b50f2ddc9564d8d6f951a72e. Apr 25 00:07:24.843361 containerd[1482]: time="2026-04-25T00:07:24.843191946Z" level=info msg="StartContainer for \"ff252c65dbec501509c94304bd75d243a1bae700b50f2ddc9564d8d6f951a72e\" returns successfully" Apr 25 00:07:25.566120 kubelet[2527]: I0425 00:07:25.566012 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-746bc57ccf-bcnv6" podStartSLOduration=26.969579438 podStartE2EDuration="30.565975857s" podCreationTimestamp="2026-04-25 00:06:55 +0000 UTC" firstStartedPulling="2026-04-25 00:07:20.977335083 +0000 UTC m=+45.654145308" lastFinishedPulling="2026-04-25 00:07:24.573731501 +0000 UTC m=+49.250541727" observedRunningTime="2026-04-25 00:07:25.56462151 +0000 UTC m=+50.241431744" watchObservedRunningTime="2026-04-25 00:07:25.565975857 +0000 UTC m=+50.242786093" Apr 25 00:07:26.026830 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932). 
Apr 25 00:07:26.091377 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:26.094837 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:26.112124 systemd-logind[1458]: New session 9 of user core. Apr 25 00:07:26.119434 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 25 00:07:26.265573 sshd[4952]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:26.269072 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:35932.service: Deactivated successfully. Apr 25 00:07:26.270888 systemd[1]: session-9.scope: Deactivated successfully. Apr 25 00:07:26.271475 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Apr 25 00:07:26.272173 systemd-logind[1458]: Removed session 9. Apr 25 00:07:26.558551 containerd[1482]: time="2026-04-25T00:07:26.558202616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:26.561116 containerd[1482]: time="2026-04-25T00:07:26.559929516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 25 00:07:26.561462 containerd[1482]: time="2026-04-25T00:07:26.561435709Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:26.564072 containerd[1482]: time="2026-04-25T00:07:26.564028950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:26.564712 containerd[1482]: time="2026-04-25T00:07:26.564687105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.990645239s" Apr 25 00:07:26.564750 containerd[1482]: time="2026-04-25T00:07:26.564717623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 25 00:07:26.567519 containerd[1482]: time="2026-04-25T00:07:26.567488549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 00:07:26.581230 containerd[1482]: time="2026-04-25T00:07:26.581071681Z" level=info msg="CreateContainer within sandbox \"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 25 00:07:26.601259 containerd[1482]: time="2026-04-25T00:07:26.601153783Z" level=info msg="CreateContainer within sandbox \"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cd7c03791acc68cf7ce93ee3427ca16aeea599b8c5ab5c5f0213f254e5902715\"" Apr 25 00:07:26.602810 containerd[1482]: time="2026-04-25T00:07:26.602747538Z" level=info msg="StartContainer for \"cd7c03791acc68cf7ce93ee3427ca16aeea599b8c5ab5c5f0213f254e5902715\"" Apr 25 00:07:26.639645 systemd[1]: Started cri-containerd-cd7c03791acc68cf7ce93ee3427ca16aeea599b8c5ab5c5f0213f254e5902715.scope - libcontainer container cd7c03791acc68cf7ce93ee3427ca16aeea599b8c5ab5c5f0213f254e5902715. 
Apr 25 00:07:26.661281 containerd[1482]: time="2026-04-25T00:07:26.661243057Z" level=info msg="StartContainer for \"cd7c03791acc68cf7ce93ee3427ca16aeea599b8c5ab5c5f0213f254e5902715\" returns successfully" Apr 25 00:07:29.819892 containerd[1482]: time="2026-04-25T00:07:29.819599886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:29.823113 containerd[1482]: time="2026-04-25T00:07:29.820302570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 25 00:07:29.823113 containerd[1482]: time="2026-04-25T00:07:29.822261488Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:29.824786 containerd[1482]: time="2026-04-25T00:07:29.824753800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:29.825477 containerd[1482]: time="2026-04-25T00:07:29.825448338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.25793077s" Apr 25 00:07:29.825477 containerd[1482]: time="2026-04-25T00:07:29.825478042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 25 00:07:29.828921 containerd[1482]: time="2026-04-25T00:07:29.828873083Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 25 00:07:29.832715 containerd[1482]: time="2026-04-25T00:07:29.832655977Z" level=info msg="CreateContainer within sandbox \"deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 00:07:29.856895 containerd[1482]: time="2026-04-25T00:07:29.856819147Z" level=info msg="CreateContainer within sandbox \"deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"baa53d5fdb759dc28a32461dbeb4598a4fa7d7abdfc3ce30bb31ffc452884158\"" Apr 25 00:07:29.857630 containerd[1482]: time="2026-04-25T00:07:29.857595420Z" level=info msg="StartContainer for \"baa53d5fdb759dc28a32461dbeb4598a4fa7d7abdfc3ce30bb31ffc452884158\"" Apr 25 00:07:29.888624 systemd[1]: Started cri-containerd-baa53d5fdb759dc28a32461dbeb4598a4fa7d7abdfc3ce30bb31ffc452884158.scope - libcontainer container baa53d5fdb759dc28a32461dbeb4598a4fa7d7abdfc3ce30bb31ffc452884158. 
Apr 25 00:07:29.960068 containerd[1482]: time="2026-04-25T00:07:29.959836443Z" level=info msg="StartContainer for \"baa53d5fdb759dc28a32461dbeb4598a4fa7d7abdfc3ce30bb31ffc452884158\" returns successfully" Apr 25 00:07:30.610225 kubelet[2527]: I0425 00:07:30.608944 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7bfd9bd8c4-6pnn8" podStartSLOduration=29.801676183 podStartE2EDuration="37.607945921s" podCreationTimestamp="2026-04-25 00:06:53 +0000 UTC" firstStartedPulling="2026-04-25 00:07:22.022271507 +0000 UTC m=+46.699081732" lastFinishedPulling="2026-04-25 00:07:29.828541238 +0000 UTC m=+54.505351470" observedRunningTime="2026-04-25 00:07:30.59874657 +0000 UTC m=+55.275556806" watchObservedRunningTime="2026-04-25 00:07:30.607945921 +0000 UTC m=+55.284756156" Apr 25 00:07:31.323376 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098). Apr 25 00:07:31.394901 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:31.397124 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:31.427503 systemd-logind[1458]: New session 10 of user core. Apr 25 00:07:31.468753 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 25 00:07:31.573627 kubelet[2527]: I0425 00:07:31.573437 2527 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 00:07:31.734386 sshd[5077]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:31.740424 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:52098.service: Deactivated successfully. Apr 25 00:07:31.750313 systemd[1]: session-10.scope: Deactivated successfully. Apr 25 00:07:31.753003 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Apr 25 00:07:31.754124 systemd-logind[1458]: Removed session 10. 
Apr 25 00:07:32.462378 containerd[1482]: time="2026-04-25T00:07:32.460293848Z" level=info msg="StopPodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\"" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.611 [INFO][5104] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.613 [INFO][5104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" iface="eth0" netns="/var/run/netns/cni-8755b1ac-00b0-bac1-a2f8-442bc3374902" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.613 [INFO][5104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" iface="eth0" netns="/var/run/netns/cni-8755b1ac-00b0-bac1-a2f8-442bc3374902" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.613 [INFO][5104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" iface="eth0" netns="/var/run/netns/cni-8755b1ac-00b0-bac1-a2f8-442bc3374902" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.614 [INFO][5104] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.614 [INFO][5104] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.671 [INFO][5112] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.671 [INFO][5112] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.671 [INFO][5112] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.682 [WARNING][5112] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.682 [INFO][5112] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.684 [INFO][5112] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:32.690091 containerd[1482]: 2026-04-25 00:07:32.686 [INFO][5104] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:32.690840 containerd[1482]: time="2026-04-25T00:07:32.690386703Z" level=info msg="TearDown network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" successfully" Apr 25 00:07:32.690840 containerd[1482]: time="2026-04-25T00:07:32.690434567Z" level=info msg="StopPodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" returns successfully" Apr 25 00:07:32.699380 systemd[1]: run-netns-cni\x2d8755b1ac\x2d00b0\x2dbac1\x2da2f8\x2d442bc3374902.mount: Deactivated successfully. 
Apr 25 00:07:32.724781 containerd[1482]: time="2026-04-25T00:07:32.724117375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-dkdwn,Uid:32405509-dcbc-4cec-81f7-5a2871f23270,Namespace:calico-system,Attempt:1,}" Apr 25 00:07:33.275247 systemd-networkd[1400]: cali0e3231d5084: Link UP Apr 25 00:07:33.276062 systemd-networkd[1400]: cali0e3231d5084: Gained carrier Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:32.906 [INFO][5119] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0 calico-apiserver-7bfd9bd8c4- calico-system 32405509-dcbc-4cec-81f7-5a2871f23270 1115 0 2026-04-25 00:06:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bfd9bd8c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bfd9bd8c4-dkdwn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0e3231d5084 [] [] }} ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:32.907 [INFO][5119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.049 [INFO][5136] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" 
HandleID="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.085 [INFO][5136] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" HandleID="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7bfd9bd8c4-dkdwn", "timestamp":"2026-04-25 00:07:33.049037266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017b4a0)} Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.085 [INFO][5136] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.085 [INFO][5136] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.086 [INFO][5136] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.095 [INFO][5136] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.134 [INFO][5136] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.191 [INFO][5136] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.195 [INFO][5136] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.199 [INFO][5136] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.199 [INFO][5136] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.200 [INFO][5136] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908 Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.205 [INFO][5136] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.223 [INFO][5136] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.233 [INFO][5136] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" host="localhost" Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.235 [INFO][5136] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:33.350555 containerd[1482]: 2026-04-25 00:07:33.245 [INFO][5136] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" HandleID="k8s-pod-network.7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.392791 containerd[1482]: 2026-04-25 00:07:33.255 [INFO][5119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"32405509-dcbc-4cec-81f7-5a2871f23270", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bfd9bd8c4-dkdwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e3231d5084", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:33.392791 containerd[1482]: 2026-04-25 00:07:33.258 [INFO][5119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.392791 containerd[1482]: 2026-04-25 00:07:33.259 [INFO][5119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e3231d5084 ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.392791 containerd[1482]: 2026-04-25 00:07:33.276 [INFO][5119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.392791 containerd[1482]: 2026-04-25 00:07:33.276 [INFO][5119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"32405509-dcbc-4cec-81f7-5a2871f23270", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908", Pod:"calico-apiserver-7bfd9bd8c4-dkdwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e3231d5084", MAC:"12:cd:25:2b:38:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:33.392791 containerd[1482]: 2026-04-25 00:07:33.318 [INFO][5119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908" Namespace="calico-system" Pod="calico-apiserver-7bfd9bd8c4-dkdwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:33.455021 containerd[1482]: time="2026-04-25T00:07:33.454867724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:07:33.455021 containerd[1482]: time="2026-04-25T00:07:33.454939566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:07:33.455021 containerd[1482]: time="2026-04-25T00:07:33.454961876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:33.456713 containerd[1482]: time="2026-04-25T00:07:33.455439750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:07:33.500701 systemd[1]: Started cri-containerd-7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908.scope - libcontainer container 7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908. 
Apr 25 00:07:33.562721 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:07:33.641339 containerd[1482]: time="2026-04-25T00:07:33.641100725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfd9bd8c4-dkdwn,Uid:32405509-dcbc-4cec-81f7-5a2871f23270,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908\"" Apr 25 00:07:33.656702 containerd[1482]: time="2026-04-25T00:07:33.656531337Z" level=info msg="CreateContainer within sandbox \"7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 00:07:33.724836 containerd[1482]: time="2026-04-25T00:07:33.724760330Z" level=info msg="CreateContainer within sandbox \"7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35bc310b7c8ae81e6ad31ad2e96ca1d7f38241cb594ffb68285976d3899e2781\"" Apr 25 00:07:33.732256 containerd[1482]: time="2026-04-25T00:07:33.731871465Z" level=info msg="StartContainer for \"35bc310b7c8ae81e6ad31ad2e96ca1d7f38241cb594ffb68285976d3899e2781\"" Apr 25 00:07:33.778234 systemd[1]: Started cri-containerd-35bc310b7c8ae81e6ad31ad2e96ca1d7f38241cb594ffb68285976d3899e2781.scope - libcontainer container 35bc310b7c8ae81e6ad31ad2e96ca1d7f38241cb594ffb68285976d3899e2781. Apr 25 00:07:33.846907 containerd[1482]: time="2026-04-25T00:07:33.842561923Z" level=info msg="StartContainer for \"35bc310b7c8ae81e6ad31ad2e96ca1d7f38241cb594ffb68285976d3899e2781\" returns successfully" Apr 25 00:07:34.171588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899597650.mount: Deactivated successfully. 
Apr 25 00:07:34.882816 containerd[1482]: time="2026-04-25T00:07:34.880014608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:34.882816 containerd[1482]: time="2026-04-25T00:07:34.904980874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 25 00:07:34.907738 containerd[1482]: time="2026-04-25T00:07:34.907672663Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:34.912423 containerd[1482]: time="2026-04-25T00:07:34.911295314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:34.912423 containerd[1482]: time="2026-04-25T00:07:34.911979921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.083076447s" Apr 25 00:07:34.912423 containerd[1482]: time="2026-04-25T00:07:34.912006639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 25 00:07:34.914530 containerd[1482]: time="2026-04-25T00:07:34.914502487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 25 00:07:34.928466 containerd[1482]: time="2026-04-25T00:07:34.928274853Z" level=info msg="CreateContainer within sandbox 
\"226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 25 00:07:34.949321 systemd-networkd[1400]: cali0e3231d5084: Gained IPv6LL Apr 25 00:07:34.951472 containerd[1482]: time="2026-04-25T00:07:34.950451261Z" level=info msg="CreateContainer within sandbox \"226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fb760b7b48bee9c4145cbf058684529f50034b8a65705b479b84b74be2b497cf\"" Apr 25 00:07:34.954011 containerd[1482]: time="2026-04-25T00:07:34.953734436Z" level=info msg="StartContainer for \"fb760b7b48bee9c4145cbf058684529f50034b8a65705b479b84b74be2b497cf\"" Apr 25 00:07:35.044650 systemd[1]: Started cri-containerd-fb760b7b48bee9c4145cbf058684529f50034b8a65705b479b84b74be2b497cf.scope - libcontainer container fb760b7b48bee9c4145cbf058684529f50034b8a65705b479b84b74be2b497cf. Apr 25 00:07:35.094662 containerd[1482]: time="2026-04-25T00:07:35.094531834Z" level=info msg="StartContainer for \"fb760b7b48bee9c4145cbf058684529f50034b8a65705b479b84b74be2b497cf\" returns successfully" Apr 25 00:07:35.417428 containerd[1482]: time="2026-04-25T00:07:35.417242824Z" level=info msg="StopPodSandbox for \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\"" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.567 [WARNING][5329] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--qhqdg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6832a17a-615d-4511-81ef-78e8a9d1028f", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48", Pod:"coredns-7d764666f9-qhqdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic20ea658450", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.567 [INFO][5329] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.567 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" iface="eth0" netns="" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.567 [INFO][5329] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.567 [INFO][5329] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.598 [INFO][5339] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.598 [INFO][5339] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.599 [INFO][5339] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.628 [WARNING][5339] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.628 [INFO][5339] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.639 [INFO][5339] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:35.657246 containerd[1482]: 2026-04-25 00:07:35.647 [INFO][5329] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:35.657246 containerd[1482]: time="2026-04-25T00:07:35.657060074Z" level=info msg="TearDown network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\" successfully" Apr 25 00:07:35.657246 containerd[1482]: time="2026-04-25T00:07:35.657124561Z" level=info msg="StopPodSandbox for \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\" returns successfully" Apr 25 00:07:35.815421 kubelet[2527]: I0425 00:07:35.813590 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7bfd9bd8c4-dkdwn" podStartSLOduration=42.812643551 podStartE2EDuration="42.812643551s" podCreationTimestamp="2026-04-25 00:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:07:34.633224491 +0000 UTC m=+59.310034728" watchObservedRunningTime="2026-04-25 00:07:35.812643551 +0000 UTC m=+60.489453781" Apr 25 00:07:35.817392 kubelet[2527]: I0425 00:07:35.816043 2527 
pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-wxzlr" podStartSLOduration=29.130570014 podStartE2EDuration="41.815980571s" podCreationTimestamp="2026-04-25 00:06:54 +0000 UTC" firstStartedPulling="2026-04-25 00:07:22.228309841 +0000 UTC m=+46.905120067" lastFinishedPulling="2026-04-25 00:07:34.913720399 +0000 UTC m=+59.590530624" observedRunningTime="2026-04-25 00:07:35.79274523 +0000 UTC m=+60.469555466" watchObservedRunningTime="2026-04-25 00:07:35.815980571 +0000 UTC m=+60.492790796" Apr 25 00:07:35.874662 containerd[1482]: time="2026-04-25T00:07:35.874611776Z" level=info msg="RemovePodSandbox for \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\"" Apr 25 00:07:35.880512 containerd[1482]: time="2026-04-25T00:07:35.880464948Z" level=info msg="Forcibly stopping sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\"" Apr 25 00:07:36.010500 kubelet[2527]: I0425 00:07:36.008031 2527 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:35.967 [WARNING][5360] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--qhqdg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6832a17a-615d-4511-81ef-78e8a9d1028f", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7456352d1780bc806471fd2f933bbf701a59189ef929e9bb05916e578ad2ff48", Pod:"coredns-7d764666f9-qhqdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic20ea658450", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:35.968 [INFO][5360] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:35.968 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" iface="eth0" netns="" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:35.968 [INFO][5360] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:35.968 [INFO][5360] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.071 [INFO][5370] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.071 [INFO][5370] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.071 [INFO][5370] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.105 [WARNING][5370] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.106 [INFO][5370] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" HandleID="k8s-pod-network.195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Workload="localhost-k8s-coredns--7d764666f9--qhqdg-eth0" Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.124 [INFO][5370] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:36.130301 containerd[1482]: 2026-04-25 00:07:36.127 [INFO][5360] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87" Apr 25 00:07:36.130301 containerd[1482]: time="2026-04-25T00:07:36.130068991Z" level=info msg="TearDown network for sandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\" successfully" Apr 25 00:07:36.215183 containerd[1482]: time="2026-04-25T00:07:36.211140281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:36.215183 containerd[1482]: time="2026-04-25T00:07:36.212340345Z" level=info msg="RemovePodSandbox \"195a0a307c2546e4bb7dc9bf25eaeda2b85d799c40f45a29461ec2d1d6f12e87\" returns successfully" Apr 25 00:07:36.241779 containerd[1482]: time="2026-04-25T00:07:36.241564191Z" level=info msg="StopPodSandbox for \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\"" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.443 [WARNING][5387] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0", GenerateName:"calico-kube-controllers-746bc57ccf-", Namespace:"calico-system", SelfLink:"", UID:"b5fc7c52-af2d-46b0-991d-30bddad7f47f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746bc57ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0", Pod:"calico-kube-controllers-746bc57ccf-bcnv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9cf27fa8e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.444 [INFO][5387] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.444 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" iface="eth0" netns="" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.444 [INFO][5387] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.444 [INFO][5387] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.474 [INFO][5396] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.474 [INFO][5396] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.475 [INFO][5396] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.494 [WARNING][5396] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.494 [INFO][5396] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.507 [INFO][5396] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:36.517589 containerd[1482]: 2026-04-25 00:07:36.514 [INFO][5387] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.517589 containerd[1482]: time="2026-04-25T00:07:36.517464379Z" level=info msg="TearDown network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\" successfully" Apr 25 00:07:36.517589 containerd[1482]: time="2026-04-25T00:07:36.517494441Z" level=info msg="StopPodSandbox for \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\" returns successfully" Apr 25 00:07:36.518653 containerd[1482]: time="2026-04-25T00:07:36.518622052Z" level=info msg="RemovePodSandbox for \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\"" Apr 25 00:07:36.518684 containerd[1482]: time="2026-04-25T00:07:36.518657035Z" level=info msg="Forcibly stopping sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\"" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.607 [WARNING][5416] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0", GenerateName:"calico-kube-controllers-746bc57ccf-", Namespace:"calico-system", SelfLink:"", UID:"b5fc7c52-af2d-46b0-991d-30bddad7f47f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746bc57ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03ef0810de6e468d1b16313df668c141dd28bd99e5bda1e4ccc5d518d6efb2b0", Pod:"calico-kube-controllers-746bc57ccf-bcnv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9cf27fa8e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.608 [INFO][5416] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.608 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" iface="eth0" netns="" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.608 [INFO][5416] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.608 [INFO][5416] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.633 [INFO][5425] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.633 [INFO][5425] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.633 [INFO][5425] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.644 [WARNING][5425] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.646 [INFO][5425] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" HandleID="k8s-pod-network.0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Workload="localhost-k8s-calico--kube--controllers--746bc57ccf--bcnv6-eth0" Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.659 [INFO][5425] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:36.670962 containerd[1482]: 2026-04-25 00:07:36.664 [INFO][5416] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049" Apr 25 00:07:36.672038 containerd[1482]: time="2026-04-25T00:07:36.671389188Z" level=info msg="TearDown network for sandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\" successfully" Apr 25 00:07:36.679892 containerd[1482]: time="2026-04-25T00:07:36.679730330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:36.679892 containerd[1482]: time="2026-04-25T00:07:36.679963446Z" level=info msg="RemovePodSandbox \"0d3f1e4b001b25beb75cac0611ca2b9d04380badecae002f7246224ca4412049\" returns successfully" Apr 25 00:07:36.685485 containerd[1482]: time="2026-04-25T00:07:36.681082008Z" level=info msg="StopPodSandbox for \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\"" Apr 25 00:07:36.763879 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:52100.service - OpenSSH per-connection server daemon (10.0.0.1:52100). Apr 25 00:07:36.811840 sshd[5473]: Accepted publickey for core from 10.0.0.1 port 52100 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:36.815799 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.760 [WARNING][5460] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--2wggw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f40b2e4-ac3e-4645-a45c-301ecaa49eb6", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce", Pod:"coredns-7d764666f9-2wggw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali323ccba9519", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.761 [INFO][5460] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.761 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" iface="eth0" netns="" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.761 [INFO][5460] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.761 [INFO][5460] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.796 [INFO][5475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.797 [INFO][5475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.797 [INFO][5475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.813 [WARNING][5475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.814 [INFO][5475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.815 [INFO][5475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:36.820563 containerd[1482]: 2026-04-25 00:07:36.818 [INFO][5460] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:36.826312 containerd[1482]: time="2026-04-25T00:07:36.820593880Z" level=info msg="TearDown network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\" successfully" Apr 25 00:07:36.826312 containerd[1482]: time="2026-04-25T00:07:36.820613549Z" level=info msg="StopPodSandbox for \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\" returns successfully" Apr 25 00:07:36.826960 containerd[1482]: time="2026-04-25T00:07:36.826882509Z" level=info msg="RemovePodSandbox for \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\"" Apr 25 00:07:36.827033 containerd[1482]: time="2026-04-25T00:07:36.827011611Z" level=info msg="Forcibly stopping sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\"" Apr 25 00:07:36.829123 systemd-logind[1458]: New session 11 of user core. Apr 25 00:07:36.834569 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:36.910 [WARNING][5497] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--2wggw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f40b2e4-ac3e-4645-a45c-301ecaa49eb6", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e969e031eb06ee9fde71e20b890d8af35fd14c1b3e5212a727c8a0553efdfce", Pod:"coredns-7d764666f9-2wggw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali323ccba9519", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:36.911 [INFO][5497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:36.911 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" iface="eth0" netns="" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:36.911 [INFO][5497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:36.911 [INFO][5497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.039 [INFO][5510] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.040 [INFO][5510] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.040 [INFO][5510] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.057 [WARNING][5510] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.058 [INFO][5510] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" HandleID="k8s-pod-network.63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Workload="localhost-k8s-coredns--7d764666f9--2wggw-eth0" Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.062 [INFO][5510] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:37.072272 containerd[1482]: 2026-04-25 00:07:37.068 [INFO][5497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a" Apr 25 00:07:37.072272 containerd[1482]: time="2026-04-25T00:07:37.071933260Z" level=info msg="TearDown network for sandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\" successfully" Apr 25 00:07:37.081733 containerd[1482]: time="2026-04-25T00:07:37.081482950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:37.083322 containerd[1482]: time="2026-04-25T00:07:37.082974025Z" level=info msg="RemovePodSandbox \"63ed77ea3e84f92d705ec43b90e2e7d8d5e2d946c7e624c287c1334cd3ea4a6a\" returns successfully" Apr 25 00:07:37.086217 containerd[1482]: time="2026-04-25T00:07:37.086199047Z" level=info msg="StopPodSandbox for \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\"" Apr 25 00:07:37.109535 containerd[1482]: time="2026-04-25T00:07:37.109487551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:37.111684 containerd[1482]: time="2026-04-25T00:07:37.111445366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 25 00:07:37.113366 containerd[1482]: time="2026-04-25T00:07:37.113313220Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:37.116982 containerd[1482]: time="2026-04-25T00:07:37.116804255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:37.117675 containerd[1482]: time="2026-04-25T00:07:37.117631539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.202550119s" Apr 25 00:07:37.117675 containerd[1482]: time="2026-04-25T00:07:37.117669179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 25 00:07:37.123370 containerd[1482]: time="2026-04-25T00:07:37.123331874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 25 00:07:37.133182 containerd[1482]: time="2026-04-25T00:07:37.133072034Z" level=info msg="CreateContainer within sandbox \"9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 25 00:07:37.201496 containerd[1482]: time="2026-04-25T00:07:37.198802985Z" level=info msg="CreateContainer within sandbox \"9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3dc5c3e7653c3a817d5dfd52a4942945207dd80d55c7475f0956aac6886e7f2c\"" Apr 25 00:07:37.202802 containerd[1482]: time="2026-04-25T00:07:37.202572545Z" level=info msg="StartContainer for \"3dc5c3e7653c3a817d5dfd52a4942945207dd80d55c7475f0956aac6886e7f2c\"" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.132 [WARNING][5534] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"c6c7637f-a019-4d59-9ef0-740af17d1030", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec", Pod:"calico-apiserver-7bfd9bd8c4-6pnn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif09a82f870c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.133 [INFO][5534] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.133 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" iface="eth0" netns="" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.133 [INFO][5534] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.133 [INFO][5534] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.198 [INFO][5544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.198 [INFO][5544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.198 [INFO][5544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.209 [WARNING][5544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.209 [INFO][5544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.210 [INFO][5544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:37.214286 containerd[1482]: 2026-04-25 00:07:37.212 [INFO][5534] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.214286 containerd[1482]: time="2026-04-25T00:07:37.213945613Z" level=info msg="TearDown network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\" successfully" Apr 25 00:07:37.214286 containerd[1482]: time="2026-04-25T00:07:37.213971364Z" level=info msg="StopPodSandbox for \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\" returns successfully" Apr 25 00:07:37.214763 containerd[1482]: time="2026-04-25T00:07:37.214491030Z" level=info msg="RemovePodSandbox for \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\"" Apr 25 00:07:37.214763 containerd[1482]: time="2026-04-25T00:07:37.214553963Z" level=info msg="Forcibly stopping sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\"" Apr 25 00:07:37.262128 systemd[1]: Started cri-containerd-3dc5c3e7653c3a817d5dfd52a4942945207dd80d55c7475f0956aac6886e7f2c.scope - libcontainer container 3dc5c3e7653c3a817d5dfd52a4942945207dd80d55c7475f0956aac6886e7f2c. 
Apr 25 00:07:37.284797 sshd[5473]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:37.289018 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:52100.service: Deactivated successfully. Apr 25 00:07:37.299307 systemd[1]: session-11.scope: Deactivated successfully. Apr 25 00:07:37.302057 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Apr 25 00:07:37.309817 systemd-logind[1458]: Removed session 11. Apr 25 00:07:37.374906 containerd[1482]: time="2026-04-25T00:07:37.374565573Z" level=info msg="StartContainer for \"3dc5c3e7653c3a817d5dfd52a4942945207dd80d55c7475f0956aac6886e7f2c\" returns successfully" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.323 [WARNING][5569] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"c6c7637f-a019-4d59-9ef0-740af17d1030", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"deae89729e6e00ab020e79e75e407c6de13350842256c13be8f21966dd5a09ec", Pod:"calico-apiserver-7bfd9bd8c4-6pnn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif09a82f870c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.324 [INFO][5569] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.324 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" iface="eth0" netns="" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.324 [INFO][5569] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.324 [INFO][5569] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.387 [INFO][5595] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.388 [INFO][5595] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.388 [INFO][5595] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.493 [WARNING][5595] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.493 [INFO][5595] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" HandleID="k8s-pod-network.f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--6pnn8-eth0" Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.522 [INFO][5595] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:37.527036 containerd[1482]: 2026-04-25 00:07:37.524 [INFO][5569] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2" Apr 25 00:07:37.527036 containerd[1482]: time="2026-04-25T00:07:37.526984802Z" level=info msg="TearDown network for sandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\" successfully" Apr 25 00:07:37.534028 containerd[1482]: time="2026-04-25T00:07:37.533742952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:37.534028 containerd[1482]: time="2026-04-25T00:07:37.533932702Z" level=info msg="RemovePodSandbox \"f1504703fef588c5e40f634486eda1f29b3a49870504b59a27d44141b567e3d2\" returns successfully" Apr 25 00:07:37.548665 containerd[1482]: time="2026-04-25T00:07:37.548096762Z" level=info msg="StopPodSandbox for \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\"" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.765 [WARNING][5625] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" WorkloadEndpoint="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.766 [INFO][5625] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.766 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" iface="eth0" netns="" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.766 [INFO][5625] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.766 [INFO][5625] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.804 [INFO][5635] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.804 [INFO][5635] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.804 [INFO][5635] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.828 [WARNING][5635] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.829 [INFO][5635] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.835 [INFO][5635] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:37.839504 containerd[1482]: 2026-04-25 00:07:37.837 [INFO][5625] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.840961 containerd[1482]: time="2026-04-25T00:07:37.840315806Z" level=info msg="TearDown network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\" successfully" Apr 25 00:07:37.840961 containerd[1482]: time="2026-04-25T00:07:37.840380068Z" level=info msg="StopPodSandbox for \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\" returns successfully" Apr 25 00:07:37.841097 containerd[1482]: time="2026-04-25T00:07:37.841057416Z" level=info msg="RemovePodSandbox for \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\"" Apr 25 00:07:37.841143 containerd[1482]: time="2026-04-25T00:07:37.841118878Z" level=info msg="Forcibly stopping sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\"" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.899 [WARNING][5677] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" 
WorkloadEndpoint="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.899 [INFO][5677] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.899 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" iface="eth0" netns="" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.899 [INFO][5677] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.899 [INFO][5677] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.957 [INFO][5686] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.959 [INFO][5686] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.959 [INFO][5686] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.970 [WARNING][5686] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.970 [INFO][5686] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" HandleID="k8s-pod-network.5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Workload="localhost-k8s-whisker--b49f7945--25tvh-eth0" Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.987 [INFO][5686] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:37.996533 containerd[1482]: 2026-04-25 00:07:37.993 [INFO][5677] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf" Apr 25 00:07:38.002554 containerd[1482]: time="2026-04-25T00:07:37.998070062Z" level=info msg="TearDown network for sandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\" successfully" Apr 25 00:07:38.011179 containerd[1482]: time="2026-04-25T00:07:38.010778270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:38.011179 containerd[1482]: time="2026-04-25T00:07:38.011072272Z" level=info msg="RemovePodSandbox \"5f3015e278cbe074d6144d5e5c54ac048899a9e433148c8900f58bf72d576faf\" returns successfully" Apr 25 00:07:38.014018 containerd[1482]: time="2026-04-25T00:07:38.013220229Z" level=info msg="StopPodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\"" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.081 [WARNING][5703] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"32405509-dcbc-4cec-81f7-5a2871f23270", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908", Pod:"calico-apiserver-7bfd9bd8c4-dkdwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e3231d5084", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.081 [INFO][5703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.081 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" iface="eth0" netns="" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.081 [INFO][5703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.081 [INFO][5703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.108 [INFO][5711] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.108 [INFO][5711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.108 [INFO][5711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.119 [WARNING][5711] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.119 [INFO][5711] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.129 [INFO][5711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:38.134525 containerd[1482]: 2026-04-25 00:07:38.132 [INFO][5703] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.134525 containerd[1482]: time="2026-04-25T00:07:38.134293100Z" level=info msg="TearDown network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" successfully" Apr 25 00:07:38.134525 containerd[1482]: time="2026-04-25T00:07:38.134379396Z" level=info msg="StopPodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" returns successfully" Apr 25 00:07:38.140904 containerd[1482]: time="2026-04-25T00:07:38.140820872Z" level=info msg="RemovePodSandbox for \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\"" Apr 25 00:07:38.140942 containerd[1482]: time="2026-04-25T00:07:38.140930713Z" level=info msg="Forcibly stopping sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\"" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.204 [WARNING][5728] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0", GenerateName:"calico-apiserver-7bfd9bd8c4-", Namespace:"calico-system", SelfLink:"", UID:"32405509-dcbc-4cec-81f7-5a2871f23270", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfd9bd8c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e77b219f42431e8e4611d1117d347d21f5422e47eeeb6d2edcf293eb0aff908", Pod:"calico-apiserver-7bfd9bd8c4-dkdwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e3231d5084", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.205 [INFO][5728] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.205 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" iface="eth0" netns="" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.205 [INFO][5728] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.205 [INFO][5728] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.237 [INFO][5737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.237 [INFO][5737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.237 [INFO][5737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.250 [WARNING][5737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.250 [INFO][5737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" HandleID="k8s-pod-network.1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Workload="localhost-k8s-calico--apiserver--7bfd9bd8c4--dkdwn-eth0" Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.259 [INFO][5737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:38.263833 containerd[1482]: 2026-04-25 00:07:38.261 [INFO][5728] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8" Apr 25 00:07:38.263833 containerd[1482]: time="2026-04-25T00:07:38.263963704Z" level=info msg="TearDown network for sandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" successfully" Apr 25 00:07:38.268192 containerd[1482]: time="2026-04-25T00:07:38.268167076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:38.268329 containerd[1482]: time="2026-04-25T00:07:38.268310604Z" level=info msg="RemovePodSandbox \"1fd867e1d87c54fe2d73ee5577d9e43d88cf7093aa4da0a81a4da55cf8b80da8\" returns successfully" Apr 25 00:07:38.270274 containerd[1482]: time="2026-04-25T00:07:38.270231245Z" level=info msg="StopPodSandbox for \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\"" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.352 [WARNING][5754] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lfm6g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"01cf6d9e-8d92-40f8-898e-724f0af87eaf", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac", Pod:"csi-node-driver-lfm6g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32a85d38c0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.353 [INFO][5754] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.353 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" iface="eth0" netns="" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.353 [INFO][5754] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.353 [INFO][5754] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.387 [INFO][5763] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.387 [INFO][5763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.387 [INFO][5763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.393 [WARNING][5763] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.393 [INFO][5763] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.394 [INFO][5763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:38.398070 containerd[1482]: 2026-04-25 00:07:38.396 [INFO][5754] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.398070 containerd[1482]: time="2026-04-25T00:07:38.398007069Z" level=info msg="TearDown network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\" successfully" Apr 25 00:07:38.398070 containerd[1482]: time="2026-04-25T00:07:38.398030770Z" level=info msg="StopPodSandbox for \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\" returns successfully" Apr 25 00:07:38.398800 containerd[1482]: time="2026-04-25T00:07:38.398708963Z" level=info msg="RemovePodSandbox for \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\"" Apr 25 00:07:38.398853 containerd[1482]: time="2026-04-25T00:07:38.398808576Z" level=info msg="Forcibly stopping sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\"" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.438 [WARNING][5780] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lfm6g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"01cf6d9e-8d92-40f8-898e-724f0af87eaf", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac", Pod:"csi-node-driver-lfm6g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32a85d38c0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.439 [INFO][5780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.439 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" iface="eth0" netns="" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.439 [INFO][5780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.439 [INFO][5780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.461 [INFO][5788] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.461 [INFO][5788] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.461 [INFO][5788] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.468 [WARNING][5788] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.468 [INFO][5788] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" HandleID="k8s-pod-network.0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Workload="localhost-k8s-csi--node--driver--lfm6g-eth0" Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.469 [INFO][5788] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:38.472364 containerd[1482]: 2026-04-25 00:07:38.470 [INFO][5780] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29" Apr 25 00:07:38.472364 containerd[1482]: time="2026-04-25T00:07:38.472371226Z" level=info msg="TearDown network for sandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\" successfully" Apr 25 00:07:38.476376 containerd[1482]: time="2026-04-25T00:07:38.476337651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:38.476526 containerd[1482]: time="2026-04-25T00:07:38.476486664Z" level=info msg="RemovePodSandbox \"0a32578bf9a876127cf009208814022e8882e6d24b5d6e7b14c6cf5957380d29\" returns successfully" Apr 25 00:07:38.477297 containerd[1482]: time="2026-04-25T00:07:38.477248181Z" level=info msg="StopPodSandbox for \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\"" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.508 [WARNING][5807] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"8a7f5e11-f035-4efb-b64e-0dc54e087c6e", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe", Pod:"goldmane-9f7667bb8-wxzlr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie790da6c259", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.508 [INFO][5807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.508 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" iface="eth0" netns="" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.508 [INFO][5807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.509 [INFO][5807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.535 [INFO][5815] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.536 [INFO][5815] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.536 [INFO][5815] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.552 [WARNING][5815] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.553 [INFO][5815] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.555 [INFO][5815] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:38.558796 containerd[1482]: 2026-04-25 00:07:38.557 [INFO][5807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.560015 containerd[1482]: time="2026-04-25T00:07:38.559059849Z" level=info msg="TearDown network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\" successfully" Apr 25 00:07:38.560015 containerd[1482]: time="2026-04-25T00:07:38.559105725Z" level=info msg="StopPodSandbox for \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\" returns successfully" Apr 25 00:07:38.563199 containerd[1482]: time="2026-04-25T00:07:38.562890309Z" level=info msg="RemovePodSandbox for \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\"" Apr 25 00:07:38.563199 containerd[1482]: time="2026-04-25T00:07:38.563041347Z" level=info msg="Forcibly stopping sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\"" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.620 [WARNING][5838] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"8a7f5e11-f035-4efb-b64e-0dc54e087c6e", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 6, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"226abffb2156e6b5dc3351d18de32ff4a4303d8c7e6ffc014d1b877b67b08bbe", Pod:"goldmane-9f7667bb8-wxzlr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie790da6c259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.620 [INFO][5838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.620 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" iface="eth0" netns="" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.620 [INFO][5838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.620 [INFO][5838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.642 [INFO][5847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.642 [INFO][5847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.642 [INFO][5847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.647 [WARNING][5847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.647 [INFO][5847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" HandleID="k8s-pod-network.4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Workload="localhost-k8s-goldmane--9f7667bb8--wxzlr-eth0" Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.649 [INFO][5847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:07:38.651972 containerd[1482]: 2026-04-25 00:07:38.650 [INFO][5838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a" Apr 25 00:07:38.651972 containerd[1482]: time="2026-04-25T00:07:38.651955314Z" level=info msg="TearDown network for sandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\" successfully" Apr 25 00:07:38.659239 containerd[1482]: time="2026-04-25T00:07:38.658965022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:07:38.659239 containerd[1482]: time="2026-04-25T00:07:38.659325030Z" level=info msg="RemovePodSandbox \"4a63780967d7508d9f8336525265190ae6480b720801ee6a939291742d5a051a\" returns successfully" Apr 25 00:07:39.309582 containerd[1482]: time="2026-04-25T00:07:39.309256869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:39.313140 containerd[1482]: time="2026-04-25T00:07:39.310155466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 25 00:07:39.313140 containerd[1482]: time="2026-04-25T00:07:39.312331769Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:39.319991 containerd[1482]: time="2026-04-25T00:07:39.319955509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:39.320662 containerd[1482]: time="2026-04-25T00:07:39.320634511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.197263612s" Apr 25 00:07:39.320731 containerd[1482]: time="2026-04-25T00:07:39.320664763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 25 00:07:39.323844 containerd[1482]: 
time="2026-04-25T00:07:39.323803785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 25 00:07:39.328753 containerd[1482]: time="2026-04-25T00:07:39.327967921Z" level=info msg="CreateContainer within sandbox \"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 25 00:07:39.378638 containerd[1482]: time="2026-04-25T00:07:39.378165200Z" level=info msg="CreateContainer within sandbox \"3f1f5cfd6ee47e5e83bbdf7c8e1d6ec16eb3968ca66b6fb8bbfae15d7e34e5ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"640ee0e5c10d45947392bde154725889e0583399333d77d2f9336d1916d620e6\"" Apr 25 00:07:39.381738 containerd[1482]: time="2026-04-25T00:07:39.381682920Z" level=info msg="StartContainer for \"640ee0e5c10d45947392bde154725889e0583399333d77d2f9336d1916d620e6\"" Apr 25 00:07:39.439078 systemd[1]: Started cri-containerd-640ee0e5c10d45947392bde154725889e0583399333d77d2f9336d1916d620e6.scope - libcontainer container 640ee0e5c10d45947392bde154725889e0583399333d77d2f9336d1916d620e6. Apr 25 00:07:39.527978 containerd[1482]: time="2026-04-25T00:07:39.527932377Z" level=info msg="StartContainer for \"640ee0e5c10d45947392bde154725889e0583399333d77d2f9336d1916d620e6\" returns successfully" Apr 25 00:07:40.190851 kubelet[2527]: I0425 00:07:40.190671 2527 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 25 00:07:40.195639 kubelet[2527]: I0425 00:07:40.195602 2527 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 25 00:07:42.063929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572568799.mount: Deactivated successfully. 
Apr 25 00:07:42.083764 containerd[1482]: time="2026-04-25T00:07:42.083453684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:42.089221 containerd[1482]: time="2026-04-25T00:07:42.084296926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 25 00:07:42.097030 containerd[1482]: time="2026-04-25T00:07:42.096781272Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:42.099423 containerd[1482]: time="2026-04-25T00:07:42.099349196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:07:42.099808 containerd[1482]: time="2026-04-25T00:07:42.099743954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.775905194s" Apr 25 00:07:42.099808 containerd[1482]: time="2026-04-25T00:07:42.099779452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 25 00:07:42.115276 containerd[1482]: time="2026-04-25T00:07:42.115108784Z" level=info msg="CreateContainer within sandbox \"9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 25 00:07:42.145954 
containerd[1482]: time="2026-04-25T00:07:42.145815859Z" level=info msg="CreateContainer within sandbox \"9ff256f545ccec6fb7bd954553056e00d6c761abd9d0fc5ad8c589d9fe7ee3c7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4d25aa070bde8ccde55063fc6c9f1737fe4fb58f1dd3e8ea0083244d20163a2e\"" Apr 25 00:07:42.148575 containerd[1482]: time="2026-04-25T00:07:42.148164690Z" level=info msg="StartContainer for \"4d25aa070bde8ccde55063fc6c9f1737fe4fb58f1dd3e8ea0083244d20163a2e\"" Apr 25 00:07:42.184575 systemd[1]: Started cri-containerd-4d25aa070bde8ccde55063fc6c9f1737fe4fb58f1dd3e8ea0083244d20163a2e.scope - libcontainer container 4d25aa070bde8ccde55063fc6c9f1737fe4fb58f1dd3e8ea0083244d20163a2e. Apr 25 00:07:42.221242 containerd[1482]: time="2026-04-25T00:07:42.221205424Z" level=info msg="StartContainer for \"4d25aa070bde8ccde55063fc6c9f1737fe4fb58f1dd3e8ea0083244d20163a2e\" returns successfully" Apr 25 00:07:42.327946 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:57922.service - OpenSSH per-connection server daemon (10.0.0.1:57922). Apr 25 00:07:42.380085 sshd[5944]: Accepted publickey for core from 10.0.0.1 port 57922 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:42.381977 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:42.390758 systemd-logind[1458]: New session 12 of user core. Apr 25 00:07:42.403021 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 25 00:07:42.787617 sshd[5944]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:42.790380 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:57922.service: Deactivated successfully. Apr 25 00:07:42.791788 systemd[1]: session-12.scope: Deactivated successfully. Apr 25 00:07:42.792377 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Apr 25 00:07:42.793276 systemd-logind[1458]: Removed session 12. 
Apr 25 00:07:42.810455 kubelet[2527]: I0425 00:07:42.810149 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7456c955c4-ctk9r" podStartSLOduration=4.020740295 podStartE2EDuration="23.810133829s" podCreationTimestamp="2026-04-25 00:07:19 +0000 UTC" firstStartedPulling="2026-04-25 00:07:22.313210602 +0000 UTC m=+46.990020826" lastFinishedPulling="2026-04-25 00:07:42.102604135 +0000 UTC m=+66.779414360" observedRunningTime="2026-04-25 00:07:42.810027171 +0000 UTC m=+67.486837406" watchObservedRunningTime="2026-04-25 00:07:42.810133829 +0000 UTC m=+67.486944055" Apr 25 00:07:42.810455 kubelet[2527]: I0425 00:07:42.810436 2527 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-lfm6g" podStartSLOduration=30.704897111 podStartE2EDuration="48.810430366s" podCreationTimestamp="2026-04-25 00:06:54 +0000 UTC" firstStartedPulling="2026-04-25 00:07:21.217791283 +0000 UTC m=+45.894601512" lastFinishedPulling="2026-04-25 00:07:39.323324538 +0000 UTC m=+64.000134767" observedRunningTime="2026-04-25 00:07:39.800602392 +0000 UTC m=+64.477412626" watchObservedRunningTime="2026-04-25 00:07:42.810430366 +0000 UTC m=+67.487240598" Apr 25 00:07:47.827833 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:57936.service - OpenSSH per-connection server daemon (10.0.0.1:57936). Apr 25 00:07:47.873956 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 57936 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:47.875390 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:47.880122 systemd-logind[1458]: New session 13 of user core. Apr 25 00:07:47.887636 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 25 00:07:48.148283 sshd[5984]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:48.156841 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:57936.service: Deactivated successfully. 
Apr 25 00:07:48.158285 systemd[1]: session-13.scope: Deactivated successfully. Apr 25 00:07:48.159521 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Apr 25 00:07:48.170959 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:57948.service - OpenSSH per-connection server daemon (10.0.0.1:57948). Apr 25 00:07:48.173770 systemd-logind[1458]: Removed session 13. Apr 25 00:07:48.267715 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 57948 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:48.282980 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:48.306961 systemd-logind[1458]: New session 14 of user core. Apr 25 00:07:48.313631 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 25 00:07:49.133650 sshd[6006]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:49.145784 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:57948.service: Deactivated successfully. Apr 25 00:07:49.147381 systemd[1]: session-14.scope: Deactivated successfully. Apr 25 00:07:49.152475 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Apr 25 00:07:49.156857 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:57958.service - OpenSSH per-connection server daemon (10.0.0.1:57958). Apr 25 00:07:49.159721 systemd-logind[1458]: Removed session 14. Apr 25 00:07:49.210744 sshd[6050]: Accepted publickey for core from 10.0.0.1 port 57958 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:49.213642 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:49.218787 systemd-logind[1458]: New session 15 of user core. Apr 25 00:07:49.225553 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 25 00:07:49.379585 sshd[6050]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:49.382827 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:57958.service: Deactivated successfully. Apr 25 00:07:49.384912 systemd[1]: session-15.scope: Deactivated successfully. Apr 25 00:07:49.385701 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Apr 25 00:07:49.386514 systemd-logind[1458]: Removed session 15. Apr 25 00:07:54.414706 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:41952.service - OpenSSH per-connection server daemon (10.0.0.1:41952). Apr 25 00:07:54.462704 kubelet[2527]: E0425 00:07:54.461874 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:07:54.508721 sshd[6072]: Accepted publickey for core from 10.0.0.1 port 41952 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:07:54.516947 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:07:54.545165 systemd-logind[1458]: New session 16 of user core. Apr 25 00:07:54.555153 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 25 00:07:54.867850 sshd[6072]: pam_unix(sshd:session): session closed for user core Apr 25 00:07:54.872584 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:41952.service: Deactivated successfully. Apr 25 00:07:54.874609 systemd[1]: session-16.scope: Deactivated successfully. Apr 25 00:07:54.875160 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Apr 25 00:07:54.876027 systemd-logind[1458]: Removed session 16. Apr 25 00:07:59.918005 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:55292.service - OpenSSH per-connection server daemon (10.0.0.1:55292). 
Apr 25 00:08:00.095511 sshd[6109]: Accepted publickey for core from 10.0.0.1 port 55292 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:00.107125 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:00.147848 systemd-logind[1458]: New session 17 of user core. Apr 25 00:08:00.217367 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 25 00:08:00.769226 sshd[6109]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:00.782489 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:55292.service: Deactivated successfully. Apr 25 00:08:00.784064 systemd[1]: session-17.scope: Deactivated successfully. Apr 25 00:08:00.785462 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Apr 25 00:08:00.786552 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:55300.service - OpenSSH per-connection server daemon (10.0.0.1:55300). Apr 25 00:08:00.787223 systemd-logind[1458]: Removed session 17. Apr 25 00:08:00.845186 sshd[6123]: Accepted publickey for core from 10.0.0.1 port 55300 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:00.846604 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:00.851332 systemd-logind[1458]: New session 18 of user core. Apr 25 00:08:00.862461 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 25 00:08:01.286180 sshd[6123]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:01.294056 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:55300.service: Deactivated successfully. Apr 25 00:08:01.295652 systemd[1]: session-18.scope: Deactivated successfully. Apr 25 00:08:01.296316 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Apr 25 00:08:01.304655 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:55312.service - OpenSSH per-connection server daemon (10.0.0.1:55312). 
Apr 25 00:08:01.305544 systemd-logind[1458]: Removed session 18. Apr 25 00:08:01.354768 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 55312 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:01.358063 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:01.364645 systemd-logind[1458]: New session 19 of user core. Apr 25 00:08:01.370535 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 25 00:08:02.114483 sshd[6136]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:02.124229 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:55312.service: Deactivated successfully. Apr 25 00:08:02.132456 systemd[1]: session-19.scope: Deactivated successfully. Apr 25 00:08:02.138453 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Apr 25 00:08:02.148018 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:55314.service - OpenSSH per-connection server daemon (10.0.0.1:55314). Apr 25 00:08:02.150002 systemd-logind[1458]: Removed session 19. Apr 25 00:08:02.188454 sshd[6163]: Accepted publickey for core from 10.0.0.1 port 55314 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:02.189610 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:02.193361 systemd-logind[1458]: New session 20 of user core. Apr 25 00:08:02.202576 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 25 00:08:02.805770 sshd[6163]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:02.826932 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:55314.service: Deactivated successfully. Apr 25 00:08:02.835576 systemd[1]: session-20.scope: Deactivated successfully. Apr 25 00:08:02.850316 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. 
Apr 25 00:08:02.861335 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:55322.service - OpenSSH per-connection server daemon (10.0.0.1:55322). Apr 25 00:08:02.866570 systemd-logind[1458]: Removed session 20. Apr 25 00:08:02.906642 sshd[6176]: Accepted publickey for core from 10.0.0.1 port 55322 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:02.908134 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:02.912262 systemd-logind[1458]: New session 21 of user core. Apr 25 00:08:02.925329 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 25 00:08:03.038066 sshd[6176]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:03.040819 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:55322.service: Deactivated successfully. Apr 25 00:08:03.042321 systemd[1]: session-21.scope: Deactivated successfully. Apr 25 00:08:03.042928 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Apr 25 00:08:03.043677 systemd-logind[1458]: Removed session 21. Apr 25 00:08:03.458460 kubelet[2527]: E0425 00:08:03.458166 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:08:04.466524 kubelet[2527]: E0425 00:08:04.466060 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:08:06.464102 kubelet[2527]: E0425 00:08:06.463368 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:08:08.078134 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:55324.service - OpenSSH per-connection server daemon (10.0.0.1:55324). 
Apr 25 00:08:08.256628 sshd[6226]: Accepted publickey for core from 10.0.0.1 port 55324 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:08.270720 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:08.300237 systemd-logind[1458]: New session 22 of user core. Apr 25 00:08:08.311490 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 25 00:08:08.603632 sshd[6226]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:08.606626 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:55324.service: Deactivated successfully. Apr 25 00:08:08.608554 systemd[1]: session-22.scope: Deactivated successfully. Apr 25 00:08:08.609692 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Apr 25 00:08:08.610527 systemd-logind[1458]: Removed session 22. Apr 25 00:08:13.621569 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:59184.service - OpenSSH per-connection server daemon (10.0.0.1:59184). Apr 25 00:08:13.814363 sshd[6251]: Accepted publickey for core from 10.0.0.1 port 59184 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:13.845002 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:14.013340 systemd-logind[1458]: New session 23 of user core. Apr 25 00:08:14.047981 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 25 00:08:15.968969 sshd[6251]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:15.980991 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:59184.service: Deactivated successfully. Apr 25 00:08:15.984445 systemd[1]: session-23.scope: Deactivated successfully. Apr 25 00:08:15.984806 systemd[1]: session-23.scope: Consumed 1.466s CPU time. Apr 25 00:08:16.012680 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Apr 25 00:08:16.030998 systemd-logind[1458]: Removed session 23. 
Apr 25 00:08:21.218747 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:33814.service - OpenSSH per-connection server daemon (10.0.0.1:33814). Apr 25 00:08:21.909799 sshd[6310]: Accepted publickey for core from 10.0.0.1 port 33814 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 25 00:08:21.985102 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:08:22.037698 systemd-logind[1458]: New session 24 of user core. Apr 25 00:08:22.050901 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 25 00:08:23.138718 sshd[6310]: pam_unix(sshd:session): session closed for user core Apr 25 00:08:23.157457 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:33814.service: Deactivated successfully. Apr 25 00:08:23.162685 systemd[1]: session-24.scope: Deactivated successfully. Apr 25 00:08:23.170421 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. Apr 25 00:08:23.171506 systemd-logind[1458]: Removed session 24.