Apr 24 23:47:08.860273 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:47:08.860291 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:47:08.860301 kernel: BIOS-provided physical RAM map:
Apr 24 23:47:08.860306 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 24 23:47:08.860312 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 24 23:47:08.860317 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 23:47:08.860323 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 24 23:47:08.860328 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 24 23:47:08.860333 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 23:47:08.860340 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 23:47:08.860345 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 23:47:08.860350 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 23:47:08.860366 kernel: NX (Execute Disable) protection: active
Apr 24 23:47:08.860372 kernel: APIC: Static calls initialized
Apr 24 23:47:08.860378 kernel: SMBIOS 2.8 present.
Apr 24 23:47:08.860393 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 24 23:47:08.860399 kernel: Hypervisor detected: KVM
Apr 24 23:47:08.860405 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:47:08.860411 kernel: kvm-clock: using sched offset of 4366885325 cycles
Apr 24 23:47:08.860417 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:47:08.860423 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 23:47:08.860429 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:47:08.860435 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:47:08.860441 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 24 23:47:08.860448 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 23:47:08.860454 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:47:08.860459 kernel: Using GB pages for direct mapping
Apr 24 23:47:08.860463 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:47:08.860468 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 24 23:47:08.860473 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860478 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860483 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860487 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 24 23:47:08.860494 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860498 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860503 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860508 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:47:08.860512 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 24 23:47:08.860517 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 24 23:47:08.860522 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 24 23:47:08.860530 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 24 23:47:08.860535 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 24 23:47:08.860540 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 24 23:47:08.860545 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 24 23:47:08.860550 kernel: No NUMA configuration found
Apr 24 23:47:08.860556 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 24 23:47:08.860574 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 24 23:47:08.860580 kernel: Zone ranges:
Apr 24 23:47:08.860585 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:47:08.860590 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 24 23:47:08.860595 kernel: Normal empty
Apr 24 23:47:08.860600 kernel: Movable zone start for each node
Apr 24 23:47:08.860605 kernel: Early memory node ranges
Apr 24 23:47:08.860610 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 23:47:08.860615 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 24 23:47:08.860620 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 24 23:47:08.860625 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:47:08.860632 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 23:47:08.860643 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 24 23:47:08.860649 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 23:47:08.860654 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:47:08.860659 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:47:08.860664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 23:47:08.860670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:47:08.860675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:47:08.860680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:47:08.860686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:47:08.860691 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:47:08.860696 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:47:08.860701 kernel: TSC deadline timer available
Apr 24 23:47:08.860707 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 24 23:47:08.860712 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:47:08.860717 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 23:47:08.860722 kernel: kvm-guest: setup PV sched yield
Apr 24 23:47:08.860733 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 23:47:08.860740 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:47:08.860745 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:47:08.860750 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 23:47:08.860755 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 24 23:47:08.860760 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 24 23:47:08.860765 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 23:47:08.860770 kernel: kvm-guest: PV spinlocks enabled
Apr 24 23:47:08.860775 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:47:08.860781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:47:08.860788 kernel: random: crng init done
Apr 24 23:47:08.860793 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:47:08.860798 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:47:08.860803 kernel: Fallback order for Node 0: 0
Apr 24 23:47:08.860808 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 24 23:47:08.860813 kernel: Policy zone: DMA32
Apr 24 23:47:08.860818 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:47:08.860824 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 24 23:47:08.860830 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 23:47:08.860835 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:47:08.860840 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:47:08.860845 kernel: Dynamic Preempt: voluntary
Apr 24 23:47:08.860850 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:47:08.860856 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:47:08.860861 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 23:47:08.860866 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:47:08.860871 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:47:08.860938 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:47:08.860943 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:47:08.860948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 23:47:08.860953 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 23:47:08.860965 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:47:08.860971 kernel: Console: colour VGA+ 80x25
Apr 24 23:47:08.860975 kernel: printk: console [ttyS0] enabled
Apr 24 23:47:08.860980 kernel: ACPI: Core revision 20230628
Apr 24 23:47:08.860986 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 23:47:08.860992 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:47:08.860997 kernel: x2apic enabled
Apr 24 23:47:08.861002 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:47:08.861007 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 23:47:08.861013 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 23:47:08.861018 kernel: kvm-guest: setup PV IPIs
Apr 24 23:47:08.861023 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 23:47:08.861028 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:47:08.861039 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 23:47:08.861045 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 23:47:08.861050 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 23:47:08.861056 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 23:47:08.861062 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:47:08.861068 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:47:08.861073 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:47:08.861079 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:47:08.861084 kernel: RETBleed: Vulnerable
Apr 24 23:47:08.861091 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:47:08.861097 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:47:08.861109 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 23:47:08.861115 kernel: active return thunk: its_return_thunk
Apr 24 23:47:08.861120 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:47:08.861126 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:47:08.861131 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:47:08.861137 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:47:08.861142 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:47:08.861149 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:47:08.861155 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:47:08.861160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:47:08.861166 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:47:08.861171 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:47:08.861177 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:47:08.861182 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 23:47:08.861188 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:47:08.861193 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:47:08.861200 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:47:08.861206 kernel: landlock: Up and running.
Apr 24 23:47:08.861211 kernel: SELinux: Initializing.
Apr 24 23:47:08.861217 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:47:08.861222 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:47:08.861228 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 23:47:08.861239 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:47:08.861245 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:47:08.861252 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:47:08.861258 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 23:47:08.861263 kernel: signal: max sigframe size: 3632
Apr 24 23:47:08.861269 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:47:08.861274 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:47:08.861280 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:47:08.861285 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:47:08.861291 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:47:08.861296 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 23:47:08.861303 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 23:47:08.861309 kernel: smpboot: Max logical packages: 1
Apr 24 23:47:08.861314 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 23:47:08.861320 kernel: devtmpfs: initialized
Apr 24 23:47:08.861325 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:47:08.861331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:47:08.861336 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 23:47:08.861342 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:47:08.861348 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:47:08.861354 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:47:08.861360 kernel: audit: type=2000 audit(1777074428.044:1): state=initialized audit_enabled=0 res=1
Apr 24 23:47:08.861366 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:47:08.861371 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:47:08.861377 kernel: cpuidle: using governor menu
Apr 24 23:47:08.861382 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:47:08.861388 kernel: dca service started, version 1.12.1
Apr 24 23:47:08.861393 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 24 23:47:08.861399 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 23:47:08.861406 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:47:08.861412 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:47:08.861417 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:47:08.861423 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:47:08.861429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:47:08.861434 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:47:08.861440 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:47:08.861445 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:47:08.861451 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:47:08.861458 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:47:08.861463 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:47:08.861469 kernel: ACPI: Interpreter enabled
Apr 24 23:47:08.861474 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 23:47:08.861480 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:47:08.861486 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:47:08.861491 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:47:08.861497 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:47:08.861502 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:47:08.861701 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:47:08.861774 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:47:08.861834 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:47:08.861842 kernel: PCI host bridge to bus 0000:00
Apr 24 23:47:08.861949 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:47:08.862005 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:47:08.862063 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:47:08.862116 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 24 23:47:08.862169 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 23:47:08.862222 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 24 23:47:08.862275 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:47:08.862380 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:47:08.862471 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 24 23:47:08.862536 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 24 23:47:08.862628 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 24 23:47:08.862689 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 24 23:47:08.862748 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:47:08.862841 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 24 23:47:08.862943 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 24 23:47:08.863004 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 24 23:47:08.863067 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 23:47:08.863158 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 24 23:47:08.863220 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 24 23:47:08.863279 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 24 23:47:08.863340 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 23:47:08.863430 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 24 23:47:08.863490 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 24 23:47:08.863553 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 24 23:47:08.863653 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 24 23:47:08.863713 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 24 23:47:08.863804 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:47:08.863865 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:47:08.863973 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:47:08.864038 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 24 23:47:08.864097 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 24 23:47:08.864177 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:47:08.864239 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 24 23:47:08.864246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:47:08.864252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:47:08.864258 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:47:08.864263 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:47:08.864271 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:47:08.864277 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:47:08.864282 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:47:08.864287 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:47:08.864293 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:47:08.864299 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:47:08.864304 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:47:08.864310 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:47:08.864315 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:47:08.864322 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:47:08.864327 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:47:08.864333 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:47:08.864338 kernel: iommu: Default domain type: Translated
Apr 24 23:47:08.864344 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:47:08.864350 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:47:08.864355 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:47:08.864361 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 24 23:47:08.864366 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 24 23:47:08.864427 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:47:08.864489 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:47:08.864549 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:47:08.864557 kernel: vgaarb: loaded
Apr 24 23:47:08.864576 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:47:08.864582 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:47:08.864587 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:47:08.864593 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:47:08.864600 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:47:08.864606 kernel: pnp: PnP ACPI init
Apr 24 23:47:08.864722 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 23:47:08.864730 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 23:47:08.864736 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:47:08.864742 kernel: NET: Registered PF_INET protocol family
Apr 24 23:47:08.864747 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:47:08.864753 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:47:08.864759 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:47:08.864766 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:47:08.864772 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:47:08.864777 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:47:08.864783 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:47:08.864789 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:47:08.864794 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:47:08.864800 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:47:08.864857 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:47:08.864934 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:47:08.864988 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:47:08.865042 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 24 23:47:08.865095 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 23:47:08.865148 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 24 23:47:08.865155 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:47:08.865161 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:47:08.865167 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:47:08.865173 kernel: Initialise system trusted keyrings
Apr 24 23:47:08.865180 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:47:08.865186 kernel: Key type asymmetric registered
Apr 24 23:47:08.865191 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:47:08.865197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:47:08.865203 kernel: io scheduler mq-deadline registered
Apr 24 23:47:08.865208 kernel: io scheduler kyber registered
Apr 24 23:47:08.865214 kernel: io scheduler bfq registered
Apr 24 23:47:08.865219 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:47:08.865225 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 23:47:08.865232 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 23:47:08.865238 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 23:47:08.865244 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:47:08.865249 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:47:08.865255 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:47:08.865260 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:47:08.865266 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:47:08.865272 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:47:08.865368 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 23:47:08.865430 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 23:47:08.865485 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T23:47:08 UTC (1777074428)
Apr 24 23:47:08.865540 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 23:47:08.865547 kernel: intel_pstate: CPU model not supported
Apr 24 23:47:08.865553 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:47:08.865558 kernel: Segment Routing with IPv6
Apr 24 23:47:08.865588 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:47:08.865599 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:47:08.865610 kernel: Key type dns_resolver registered
Apr 24 23:47:08.865616 kernel: IPI shorthand broadcast: enabled
Apr 24 23:47:08.865621 kernel: sched_clock: Marking stable (949007592, 179885400)->(1167702611, -38809619)
Apr 24 23:47:08.865627 kernel: registered taskstats version 1
Apr 24 23:47:08.865633 kernel: Loading compiled-in X.509 certificates
Apr 24 23:47:08.865639 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:47:08.865644 kernel: Key type .fscrypt registered
Apr 24 23:47:08.865650 kernel: Key type fscrypt-provisioning registered
Apr 24 23:47:08.865655 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:47:08.865663 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:47:08.865668 kernel: ima: No architecture policies found
Apr 24 23:47:08.865674 kernel: clk: Disabling unused clocks
Apr 24 23:47:08.865679 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:47:08.865685 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:47:08.865690 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:47:08.865696 kernel: Run /init as init process
Apr 24 23:47:08.865701 kernel: with arguments:
Apr 24 23:47:08.865707 kernel: /init
Apr 24 23:47:08.865715 kernel: with environment:
Apr 24 23:47:08.865720 kernel: HOME=/
Apr 24 23:47:08.865726 kernel: TERM=linux
Apr 24 23:47:08.865733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:47:08.865741 systemd[1]: Detected virtualization kvm.
Apr 24 23:47:08.865747 systemd[1]: Detected architecture x86-64.
Apr 24 23:47:08.865752 systemd[1]: Running in initrd.
Apr 24 23:47:08.865758 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:47:08.865765 systemd[1]: Hostname set to .
Apr 24 23:47:08.865771 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:47:08.865776 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:47:08.865782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:47:08.865788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:47:08.865794 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:47:08.865800 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:47:08.865807 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:47:08.865814 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:47:08.865829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:47:08.865835 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:47:08.865841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:47:08.865849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:47:08.865855 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:47:08.865861 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:47:08.865867 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:47:08.865892 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:47:08.865899 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:47:08.865905 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:47:08.865936 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:47:08.865943 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:47:08.865952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:47:08.865958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:47:08.865964 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:47:08.865970 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:47:08.865976 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:47:08.865982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:47:08.865988 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:47:08.865994 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:47:08.866001 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:47:08.866007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:47:08.866013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:47:08.866019 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:47:08.866040 systemd-journald[194]: Collecting audit messages is disabled.
Apr 24 23:47:08.866057 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:47:08.866063 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:47:08.866073 systemd-journald[194]: Journal started
Apr 24 23:47:08.866088 systemd-journald[194]: Runtime Journal (/run/log/journal/43d4b73673ed44378f4968877cfc5f91) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:47:08.870652 systemd-modules-load[195]: Inserted module 'overlay'
Apr 24 23:47:08.940843 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:47:08.940872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:47:08.940909 kernel: Bridge firewalling registered
Apr 24 23:47:08.940916 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:47:08.892956 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 24 23:47:08.938497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:47:08.941486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:47:08.943843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:47:08.961030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:47:08.962202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:47:08.962967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:47:08.969323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:47:08.975654 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:47:08.976845 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:47:08.980239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:47:08.985050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:47:08.987607 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:47:08.991046 dracut-cmdline[228]: dracut-dracut-053
Apr 24 23:47:08.990640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:47:08.996101 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:47:09.017545 systemd-resolved[241]: Positive Trust Anchors:
Apr 24 23:47:09.017584 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:47:09.017608 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:47:09.019510 systemd-resolved[241]: Defaulting to hostname 'linux'.
Apr 24 23:47:09.020412 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:47:09.021205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:47:09.079924 kernel: SCSI subsystem initialized
Apr 24 23:47:09.087910 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:47:09.098928 kernel: iscsi: registered transport (tcp)
Apr 24 23:47:09.116936 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:47:09.116968 kernel: QLogic iSCSI HBA Driver
Apr 24 23:47:09.153305 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:47:09.163060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:47:09.184461 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:47:09.184641 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:47:09.184657 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:47:09.236917 kernel: raid6: avx512x4 gen() 43530 MB/s
Apr 24 23:47:09.254179 kernel: raid6: avx512x2 gen() 45279 MB/s
Apr 24 23:47:09.270945 kernel: raid6: avx512x1 gen() 46105 MB/s
Apr 24 23:47:09.287922 kernel: raid6: avx2x4 gen() 37330 MB/s
Apr 24 23:47:09.304916 kernel: raid6: avx2x2 gen() 37229 MB/s
Apr 24 23:47:09.322491 kernel: raid6: avx2x1 gen() 27648 MB/s
Apr 24 23:47:09.322517 kernel: raid6: using algorithm avx512x1 gen() 46105 MB/s
Apr 24 23:47:09.340716 kernel: raid6: .... xor() 28059 MB/s, rmw enabled
Apr 24 23:47:09.341527 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:47:09.362939 kernel: xor: automatically using best checksumming function avx
Apr 24 23:47:09.506981 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:47:09.516685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:47:09.531026 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:47:09.540528 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 24 23:47:09.543187 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:47:09.546401 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:47:09.561251 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Apr 24 23:47:09.583199 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:47:09.594999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:47:09.627332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:47:09.636004 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:47:09.644125 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:47:09.647135 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:47:09.649087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:47:09.652096 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:47:09.661510 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 24 23:47:09.662038 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:47:09.671216 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 24 23:47:09.678996 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:47:09.679028 kernel: GPT:9289727 != 19775487
Apr 24 23:47:09.679037 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:47:09.679045 kernel: GPT:9289727 != 19775487
Apr 24 23:47:09.679053 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:47:09.679062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:47:09.673121 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:47:09.686355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:47:09.687113 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:47:09.690781 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:47:09.693924 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:47:09.734117 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (459)
Apr 24 23:47:09.730856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:47:09.733554 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:47:09.747830 kernel: libata version 3.00 loaded.
Apr 24 23:47:09.734679 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:47:09.753238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:47:09.759087 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:47:09.759104 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457)
Apr 24 23:47:09.759119 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 23:47:09.759243 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 23:47:09.761896 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:47:09.764507 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 24 23:47:09.764674 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 23:47:09.766927 kernel: scsi host0: ahci
Apr 24 23:47:09.769658 kernel: scsi host1: ahci
Apr 24 23:47:09.769818 kernel: scsi host2: ahci
Apr 24 23:47:09.771338 kernel: scsi host3: ahci
Apr 24 23:47:09.771476 kernel: scsi host4: ahci
Apr 24 23:47:09.774653 kernel: scsi host5: ahci
Apr 24 23:47:09.774811 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 24 23:47:09.774821 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 24 23:47:09.774829 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 24 23:47:09.774418 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 24 23:47:09.781747 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 24 23:47:09.781765 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 24 23:47:09.781772 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 24 23:47:09.783422 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 24 23:47:09.856387 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 24 23:47:09.857454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:47:09.864733 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:47:09.868551 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 24 23:47:09.881050 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:47:09.882204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:47:09.893840 disk-uuid[554]: Primary Header is updated.
Apr 24 23:47:09.893840 disk-uuid[554]: Secondary Entries is updated.
Apr 24 23:47:09.893840 disk-uuid[554]: Secondary Header is updated.
Apr 24 23:47:09.898926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:47:09.903906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:47:09.904063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:47:09.908915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:47:10.090977 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 23:47:10.091469 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 23:47:10.091480 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 23:47:10.092915 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 24 23:47:10.094353 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 24 23:47:10.094364 kernel: ata3.00: applying bridge limits
Apr 24 23:47:10.095902 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 23:47:10.096901 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 23:47:10.097991 kernel: ata3.00: configured for UDMA/100
Apr 24 23:47:10.099948 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 24 23:47:10.149973 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 24 23:47:10.151046 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:47:10.164902 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 24 23:47:10.905910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:47:10.906412 disk-uuid[555]: The operation has completed successfully.
Apr 24 23:47:10.930698 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:47:10.930792 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:47:10.946076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:47:10.950736 sh[592]: Success
Apr 24 23:47:10.963910 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 24 23:47:10.989682 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:47:11.008083 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:47:11.010465 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:47:11.023535 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:47:11.023566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:47:11.023575 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:47:11.024908 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:47:11.026648 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:47:11.030764 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:47:11.031672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:47:11.051901 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:47:11.053400 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:47:11.068389 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:47:11.068432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:47:11.068442 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:47:11.071916 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:47:11.079329 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:47:11.081826 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:47:11.086547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:47:11.094063 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:47:11.184686 ignition[695]: Ignition 2.19.0
Apr 24 23:47:11.184702 ignition[695]: Stage: fetch-offline
Apr 24 23:47:11.184742 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:47:11.184750 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:47:11.184851 ignition[695]: parsed url from cmdline: ""
Apr 24 23:47:11.184853 ignition[695]: no config URL provided
Apr 24 23:47:11.184857 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:47:11.184863 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:47:11.184910 ignition[695]: op(1): [started] loading QEMU firmware config module
Apr 24 23:47:11.184913 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 24 23:47:11.191074 ignition[695]: op(1): [finished] loading QEMU firmware config module
Apr 24 23:47:11.200416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:47:11.211009 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:47:11.228384 systemd-networkd[782]: lo: Link UP
Apr 24 23:47:11.228400 systemd-networkd[782]: lo: Gained carrier
Apr 24 23:47:11.229454 systemd-networkd[782]: Enumeration completed
Apr 24 23:47:11.229520 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:47:11.230514 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:47:11.230516 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:47:11.231856 systemd[1]: Reached target network.target - Network.
Apr 24 23:47:11.231867 systemd-networkd[782]: eth0: Link UP
Apr 24 23:47:11.231888 systemd-networkd[782]: eth0: Gained carrier
Apr 24 23:47:11.231895 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:47:11.251378 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 23:47:11.310335 ignition[695]: parsing config with SHA512: d93a6e2406eda9d3d0e935386bd1bd2400b1bba1dd2f72c307b3e86fb408f600aa7e0ba557d44c6bedfc271941fdce2feb6d15f63185e1684a083b2482c5c929
Apr 24 23:47:11.316994 unknown[695]: fetched base config from "system"
Apr 24 23:47:11.317011 unknown[695]: fetched user config from "qemu"
Apr 24 23:47:11.317586 ignition[695]: fetch-offline: fetch-offline passed
Apr 24 23:47:11.319807 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:47:11.317749 ignition[695]: Ignition finished successfully
Apr 24 23:47:11.322015 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 24 23:47:11.332082 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:47:11.344319 ignition[786]: Ignition 2.19.0
Apr 24 23:47:11.344331 ignition[786]: Stage: kargs
Apr 24 23:47:11.344456 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:47:11.344463 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:47:11.345376 ignition[786]: kargs: kargs passed
Apr 24 23:47:11.345408 ignition[786]: Ignition finished successfully
Apr 24 23:47:11.352110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:47:11.366026 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:47:11.484375 kernel: hrtimer: interrupt took 4355822 ns
Apr 24 23:47:11.516716 ignition[793]: Ignition 2.19.0
Apr 24 23:47:11.516735 ignition[793]: Stage: disks
Apr 24 23:47:11.517069 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:47:11.517082 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:47:11.519057 ignition[793]: disks: disks passed
Apr 24 23:47:11.519172 ignition[793]: Ignition finished successfully
Apr 24 23:47:11.525850 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:47:11.528703 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:47:11.530210 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:47:11.531797 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:47:11.532310 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:47:11.534834 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:47:11.551653 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:47:11.565826 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:47:11.569463 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:47:11.571417 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:47:11.658904 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:47:11.659357 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:47:11.661014 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:47:11.670963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:47:11.672994 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:47:11.674964 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:47:11.685326 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Apr 24 23:47:11.685346 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:47:11.685355 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:47:11.685363 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:47:11.685371 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:47:11.674992 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:47:11.675008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:47:11.679338 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:47:11.694024 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:47:11.695421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:47:11.725569 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:47:11.732103 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:47:11.736401 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:47:11.742409 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:47:11.829971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:47:11.841975 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:47:11.847633 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:47:11.849955 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:47:11.868094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:47:11.887055 ignition[925]: INFO : Ignition 2.19.0
Apr 24 23:47:11.887055 ignition[925]: INFO : Stage: mount
Apr 24 23:47:11.890336 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:47:11.890336 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:47:11.890336 ignition[925]: INFO : mount: mount passed
Apr 24 23:47:11.890336 ignition[925]: INFO : Ignition finished successfully
Apr 24 23:47:11.888808 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:47:11.902010 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:47:12.023047 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:47:12.036057 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:47:12.044137 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Apr 24 23:47:12.046654 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:47:12.046668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:47:12.046676 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:47:12.052006 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:47:12.053849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:47:12.084187 ignition[956]: INFO : Ignition 2.19.0
Apr 24 23:47:12.084187 ignition[956]: INFO : Stage: files
Apr 24 23:47:12.086298 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:47:12.086298 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:47:12.086298 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:47:12.086298 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:47:12.086298 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:47:12.095433 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:47:12.095433 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:47:12.095433 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:47:12.095433 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:47:12.095433 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:47:12.088537 unknown[956]: wrote ssh authorized keys file for user: core
Apr 24 23:47:12.158219 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:47:12.272726 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:47:12.272726 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:47:12.278089 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:47:12.567309 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:47:12.688476 systemd-networkd[782]: eth0: Gained IPv6LL
Apr 24 23:47:13.421155 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:47:13.421155 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 23:47:13.426952 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:47:13.448199 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:47:13.451801 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:47:13.454008 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:47:13.454008 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:47:13.454008 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:47:13.454008 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:47:13.454008 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:47:13.454008 ignition[956]: INFO : files: files passed
Apr 24 23:47:13.454008 ignition[956]: INFO : Ignition finished successfully
Apr 24 23:47:13.454784 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:47:13.476010 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:47:13.480265 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:47:13.481011 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:47:13.481075 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:47:13.490255 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 23:47:13.493008 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:47:13.493008 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:47:13.496987 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:47:13.499081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:47:13.502556 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:47:13.520009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:47:13.540050 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:47:13.540146 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:47:13.542954 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:47:13.545554 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:47:13.549494 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:47:13.564065 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:47:13.574484 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:47:13.578353 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:47:13.588644 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:47:13.589325 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:47:13.592346 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:47:13.594864 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:47:13.594972 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:47:13.599188 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:47:13.601903 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:47:13.604137 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:47:13.606526 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:47:13.609218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:47:13.612007 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:47:13.614594 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:47:13.617273 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:47:13.620031 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:47:13.622437 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:47:13.624589 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:47:13.624731 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:47:13.628238 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:47:13.628974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:47:13.632484 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:47:13.636008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:47:13.636555 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:47:13.636673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:47:13.641673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:47:13.641768 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:47:13.644324 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:47:13.645151 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:47:13.649244 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:47:13.650089 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:47:13.653451 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:47:13.658020 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:47:13.658100 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:47:13.660214 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:47:13.660304 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:47:13.662426 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:47:13.662526 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:47:13.665089 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:47:13.665252 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:47:13.686042 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:47:13.686576 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:47:13.686717 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:47:13.689544 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:47:13.695030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:47:13.695145 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:47:13.699527 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:47:13.700904 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:47:13.706150 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:47:13.706231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:47:13.708673 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:47:13.715555 ignition[1011]: INFO : Ignition 2.19.0
Apr 24 23:47:13.715555 ignition[1011]: INFO : Stage: umount
Apr 24 23:47:13.717711 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:47:13.717711 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:47:13.717711 ignition[1011]: INFO : umount: umount passed
Apr 24 23:47:13.717711 ignition[1011]: INFO : Ignition finished successfully
Apr 24 23:47:13.718151 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:47:13.718239 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:47:13.720661 systemd[1]: Stopped target network.target - Network.
Apr 24 23:47:13.723960 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:47:13.724016 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:47:13.724423 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:47:13.724450 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:47:13.727189 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:47:13.727220 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:47:13.730376 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:47:13.730412 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:47:13.732949 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:47:13.735105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:47:13.744640 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:47:13.744797 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:47:13.747286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:47:13.747334 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:47:13.747563 systemd-networkd[782]: eth0: DHCPv6 lease lost
Apr 24 23:47:13.755129 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:47:13.755283 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:47:13.758053 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:47:13.758130 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:47:13.760689 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:47:13.760722 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:47:13.763273 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:47:13.763337 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:47:13.783021 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:47:13.783549 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:47:13.783589 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:47:13.785835 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:47:13.785868 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:47:13.788536 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:47:13.788565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:47:13.793455 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:47:13.802180 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:47:13.802277 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:47:13.821814 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:47:13.822119 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:47:13.823033 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:47:13.823066 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:47:13.826648 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:47:13.826714 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:47:13.829188 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:47:13.829223 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:47:13.833078 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:47:13.833111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:47:13.837167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:47:13.837207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:47:13.854825 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:47:13.856279 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:47:13.856382 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:47:13.858041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:47:13.858085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:47:13.860445 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:47:13.860508 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:47:13.863233 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:47:13.866466 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:47:13.876082 systemd[1]: Switching root.
Apr 24 23:47:13.905420 systemd-journald[194]: Journal stopped
Apr 24 23:47:14.656219 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:47:14.656269 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:47:14.656287 kernel: SELinux: policy capability open_perms=1
Apr 24 23:47:14.656295 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:47:14.656306 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:47:14.656317 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:47:14.656327 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:47:14.656336 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:47:14.656345 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:47:14.656355 kernel: audit: type=1403 audit(1777074434.015:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:47:14.656364 systemd[1]: Successfully loaded SELinux policy in 32.566ms.
Apr 24 23:47:14.656379 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.037ms.
Apr 24 23:47:14.656388 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:47:14.656398 systemd[1]: Detected virtualization kvm.
Apr 24 23:47:14.656406 systemd[1]: Detected architecture x86-64.
Apr 24 23:47:14.656414 systemd[1]: Detected first boot.
Apr 24 23:47:14.656424 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:47:14.656433 zram_generator::config[1056]: No configuration found.
Apr 24 23:47:14.656442 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:47:14.656450 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:47:14.656458 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:47:14.656467 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:47:14.656475 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:47:14.656483 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:47:14.656491 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:47:14.656514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:47:14.656524 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:47:14.656532 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:47:14.656540 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:47:14.656549 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:47:14.656556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:47:14.656564 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:47:14.656573 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:47:14.656583 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:47:14.656592 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:47:14.656600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:47:14.656608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:47:14.656616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:47:14.656624 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:47:14.656631 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:47:14.656653 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:47:14.656664 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:47:14.656672 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:47:14.656680 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:47:14.656688 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:47:14.656697 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:47:14.656707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:47:14.656715 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:47:14.656723 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:47:14.656732 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:47:14.656742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:47:14.656750 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:47:14.656758 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:47:14.656767 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:47:14.656775 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:47:14.656785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:47:14.656793 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:47:14.656801 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:47:14.656809 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:47:14.656819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:47:14.656827 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:47:14.656835 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:47:14.656843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:47:14.656851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:47:14.656859 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:47:14.656867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:47:14.656893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:47:14.656903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:47:14.656911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:47:14.656919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:47:14.656927 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:47:14.656935 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:47:14.656944 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:47:14.656952 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:47:14.656960 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:47:14.656968 kernel: fuse: init (API version 7.39)
Apr 24 23:47:14.656979 kernel: loop: module loaded
Apr 24 23:47:14.656986 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:47:14.656994 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:47:14.657002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:47:14.657010 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:47:14.657019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:47:14.657027 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:47:14.657035 kernel: ACPI: bus type drm_connector registered
Apr 24 23:47:14.657043 systemd[1]: Stopped verity-setup.service.
Apr 24 23:47:14.657066 systemd-journald[1134]: Collecting audit messages is disabled.
Apr 24 23:47:14.657084 systemd-journald[1134]: Journal started
Apr 24 23:47:14.657100 systemd-journald[1134]: Runtime Journal (/run/log/journal/43d4b73673ed44378f4968877cfc5f91) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:47:14.407839 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:47:14.422663 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 23:47:14.423062 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 23:47:14.661917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:47:14.667111 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:47:14.667539 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:47:14.669065 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:47:14.670654 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:47:14.672013 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:47:14.673491 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:47:14.677031 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:47:14.679020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:47:14.680928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:47:14.682851 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:47:14.683014 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:47:14.684772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:47:14.685183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:47:14.686945 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:47:14.687099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:47:14.688689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:47:14.688853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:47:14.690622 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:47:14.690762 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:47:14.692397 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:47:14.692529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:47:14.694319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:47:14.695991 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:47:14.697801 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:47:14.709720 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:47:14.719015 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:47:14.721308 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:47:14.722820 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:47:14.722846 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:47:14.724917 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:47:14.727305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:47:14.729617 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:47:14.731986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:47:14.736355 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:47:14.738781 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:47:14.740355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:47:14.741911 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:47:14.744534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:47:14.750085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:47:14.755005 systemd-journald[1134]: Time spent on flushing to /var/log/journal/43d4b73673ed44378f4968877cfc5f91 is 86.041ms for 950 entries.
Apr 24 23:47:14.755005 systemd-journald[1134]: System Journal (/var/log/journal/43d4b73673ed44378f4968877cfc5f91) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:47:14.876165 systemd-journald[1134]: Received client request to flush runtime journal.
Apr 24 23:47:14.876213 kernel: loop0: detected capacity change from 0 to 142488
Apr 24 23:47:14.756139 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:47:14.762478 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:47:14.765262 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:47:14.767053 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:47:14.770857 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:47:14.772722 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:47:14.849988 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:47:14.852432 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:47:14.853514 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:47:14.860082 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:47:14.874057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:47:14.881216 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:47:14.890222 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 24 23:47:14.895937 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:47:14.905776 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:47:14.906378 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:47:14.916171 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:47:14.924314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:47:14.924985 kernel: loop1: detected capacity change from 0 to 140768
Apr 24 23:47:15.010434 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Apr 24 23:47:15.010781 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Apr 24 23:47:15.094952 kernel: loop2: detected capacity change from 0 to 228704
Apr 24 23:47:15.102405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:47:15.135945 kernel: loop3: detected capacity change from 0 to 142488
Apr 24 23:47:15.156922 kernel: loop4: detected capacity change from 0 to 140768
Apr 24 23:47:15.171994 kernel: loop5: detected capacity change from 0 to 228704
Apr 24 23:47:15.183165 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 24 23:47:15.185229 (sd-merge)[1194]: Merged extensions into '/usr'.
Apr 24 23:47:15.190709 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:47:15.190732 systemd[1]: Reloading...
Apr 24 23:47:15.276046 zram_generator::config[1220]: No configuration found.
Apr 24 23:47:15.325266 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:47:15.370004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:47:15.401046 systemd[1]: Reloading finished in 209 ms.
Apr 24 23:47:15.579740 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:47:15.581755 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:47:15.595021 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:47:15.596937 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:47:15.603124 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:47:15.603148 systemd[1]: Reloading...
Apr 24 23:47:15.621864 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:47:15.622178 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:47:15.622762 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:47:15.622977 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Apr 24 23:47:15.623019 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Apr 24 23:47:15.629488 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:47:15.629566 systemd-tmpfiles[1258]: Skipping /boot
Apr 24 23:47:15.635302 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:47:15.635391 systemd-tmpfiles[1258]: Skipping /boot
Apr 24 23:47:15.639201 zram_generator::config[1284]: No configuration found.
Apr 24 23:47:15.755830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:47:15.785392 systemd[1]: Reloading finished in 182 ms.
Apr 24 23:47:15.801917 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:47:15.815335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:47:15.822537 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:47:15.825277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:47:15.826720 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:47:15.830110 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:47:15.834438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:47:15.837509 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:47:15.840472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:47:15.840577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:47:15.844084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:47:15.847215 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:47:15.851961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:47:15.853440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:47:15.855622 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:47:15.859150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:47:15.860642 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:47:15.861472 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Apr 24 23:47:15.863017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:47:15.863233 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:47:15.865254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:47:15.865414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:47:15.867428 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:47:15.867535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:47:15.875843 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:47:15.876237 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:47:15.877596 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:47:15.879528 augenrules[1351]: No rules Apr 24 23:47:15.880123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:47:15.882395 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:47:15.885829 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:47:15.892518 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:47:15.900962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:47:15.901101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:47:15.909090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:47:15.912475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:47:15.916988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 24 23:47:15.919750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:47:15.922021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:47:15.930073 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:47:15.931461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:47:15.932078 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:47:15.940126 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:47:15.942248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:47:15.942363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:47:15.944280 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:47:15.944367 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:47:15.946179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:47:15.946265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:47:15.951554 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:47:15.951706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:47:15.961423 systemd[1]: Finished ensure-sysext.service. Apr 24 23:47:15.966909 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1381) Apr 24 23:47:15.968353 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 24 23:47:15.969607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 24 23:47:15.969670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:47:15.976150 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 23:47:16.028853 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:47:16.056931 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 24 23:47:16.068108 kernel: ACPI: button: Power Button [PWRF] Apr 24 23:47:16.075065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 23:47:16.084052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:47:16.094115 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:47:16.103632 systemd-resolved[1327]: Positive Trust Anchors: Apr 24 23:47:16.103905 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:47:16.103996 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:47:16.107506 systemd-resolved[1327]: Defaulting to hostname 'linux'. 
Apr 24 23:47:16.107905 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 24 23:47:16.108060 systemd-networkd[1391]: lo: Link UP Apr 24 23:47:16.108074 systemd-networkd[1391]: lo: Gained carrier Apr 24 23:47:16.109005 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:47:16.110506 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:47:16.111121 systemd-networkd[1391]: Enumeration completed Apr 24 23:47:16.111505 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:47:16.111508 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:47:16.112094 systemd-networkd[1391]: eth0: Link UP Apr 24 23:47:16.112096 systemd-networkd[1391]: eth0: Gained carrier Apr 24 23:47:16.112106 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:47:16.112136 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:47:16.113573 systemd[1]: Reached target network.target - Network. Apr 24 23:47:16.121149 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:47:16.123054 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 23:47:16.125025 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 23:47:16.125084 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:47:16.126975 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Apr 24 23:47:16.581666 systemd-resolved[1327]: Clock change detected. Flushing caches. Apr 24 23:47:16.581825 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Apr 24 23:47:16.581877 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2026-04-24 23:47:16.581628 UTC. Apr 24 23:47:16.592207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:47:16.599643 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 23:47:16.599883 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 24 23:47:16.600007 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 23:47:16.606442 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:47:16.736220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:47:16.744538 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:47:16.754974 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:47:16.772381 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:47:16.809327 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:47:16.811591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:47:16.813126 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:47:16.814682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:47:16.816286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:47:16.818094 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:47:16.819510 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:47:16.821100 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 24 23:47:16.822744 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:47:16.822813 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:47:16.824037 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:47:16.826729 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:47:16.829438 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:47:16.840259 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:47:16.842627 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:47:16.844421 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:47:16.846035 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:47:16.846548 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:47:16.848484 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:47:16.848512 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:47:16.849271 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:47:16.850799 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:47:16.851357 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:47:16.854074 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:47:16.858936 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:47:16.860407 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Apr 24 23:47:16.861608 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:47:16.864493 jq[1429]: false Apr 24 23:47:16.865995 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:47:16.868962 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:47:16.871925 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:47:16.876991 extend-filesystems[1430]: Found loop3 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found loop4 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found loop5 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found sr0 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda1 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda2 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda3 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found usr Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda4 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda6 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda7 Apr 24 23:47:16.878092 extend-filesystems[1430]: Found vda9 Apr 24 23:47:16.878092 extend-filesystems[1430]: Checking size of /dev/vda9 Apr 24 23:47:16.879038 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:47:16.880474 dbus-daemon[1428]: [system] SELinux support is enabled Apr 24 23:47:16.884383 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:47:16.884708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:47:16.885736 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 24 23:47:16.895935 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:47:16.898849 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:47:16.924655 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:47:16.937687 extend-filesystems[1430]: Resized partition /dev/vda9 Apr 24 23:47:16.938756 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:47:16.939021 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:47:16.939223 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:47:16.939331 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:47:16.958609 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:47:16.959097 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:47:16.963366 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:47:16.967808 jq[1447]: true Apr 24 23:47:16.971231 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 24 23:47:16.976227 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1384) Apr 24 23:47:17.022090 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Apr 24 23:47:17.022120 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 23:47:17.022449 systemd-logind[1438]: New seat seat0. Apr 24 23:47:17.028725 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 24 23:47:17.037259 jq[1458]: true Apr 24 23:47:17.038276 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:47:17.038493 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:47:17.040915 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:47:17.040998 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:47:17.044261 update_engine[1445]: I20260424 23:47:17.044173 1445 main.cc:92] Flatcar Update Engine starting Apr 24 23:47:17.046910 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 24 23:47:17.044624 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:47:17.060446 update_engine[1445]: I20260424 23:47:17.052021 1445 update_check_scheduler.cc:74] Next update check in 4m15s Apr 24 23:47:17.050700 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:47:17.060592 tar[1453]: linux-amd64/LICENSE Apr 24 23:47:17.060592 tar[1453]: linux-amd64/helm Apr 24 23:47:17.067400 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:47:17.067465 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 24 23:47:17.067465 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 24 23:47:17.067465 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 24 23:47:17.062908 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 24 23:47:17.072426 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Apr 24 23:47:17.064725 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:47:17.064906 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:47:17.089233 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:47:17.104289 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:47:17.111686 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:47:17.111983 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:47:17.112165 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:47:17.116195 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:47:17.125439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:47:17.127344 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:47:17.172928 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 24 23:47:17.180868 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:47:17.189143 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:47:17.191627 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:47:17.193326 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:47:17.567929 containerd[1460]: time="2026-04-24T23:47:17.567742214Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:47:17.594006 containerd[1460]: time="2026-04-24T23:47:17.593790162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599016 containerd[1460]: time="2026-04-24T23:47:17.598967717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599016 containerd[1460]: time="2026-04-24T23:47:17.599004091Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:47:17.599121 containerd[1460]: time="2026-04-24T23:47:17.599027915Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:47:17.599307 containerd[1460]: time="2026-04-24T23:47:17.599276250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:47:17.599307 containerd[1460]: time="2026-04-24T23:47:17.599304191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599426 containerd[1460]: time="2026-04-24T23:47:17.599409187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599459 containerd[1460]: time="2026-04-24T23:47:17.599426729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599699 containerd[1460]: time="2026-04-24T23:47:17.599666158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599699 containerd[1460]: time="2026-04-24T23:47:17.599691785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599740 containerd[1460]: time="2026-04-24T23:47:17.599702295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599740 containerd[1460]: time="2026-04-24T23:47:17.599709759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:47:17.599819 containerd[1460]: time="2026-04-24T23:47:17.599804309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:47:17.600075 containerd[1460]: time="2026-04-24T23:47:17.600022248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:47:17.600152 containerd[1460]: time="2026-04-24T23:47:17.600141593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:47:17.600182 containerd[1460]: time="2026-04-24T23:47:17.600152496Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 24 23:47:17.600269 containerd[1460]: time="2026-04-24T23:47:17.600221509Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 24 23:47:17.600394 containerd[1460]: time="2026-04-24T23:47:17.600339473Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:47:17.612024 containerd[1460]: time="2026-04-24T23:47:17.611837412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 24 23:47:17.612024 containerd[1460]: time="2026-04-24T23:47:17.612060610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:47:17.612797 containerd[1460]: time="2026-04-24T23:47:17.612082491Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:47:17.612797 containerd[1460]: time="2026-04-24T23:47:17.612105544Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:47:17.612797 containerd[1460]: time="2026-04-24T23:47:17.612134645Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 24 23:47:17.612797 containerd[1460]: time="2026-04-24T23:47:17.612523612Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:47:17.613148 containerd[1460]: time="2026-04-24T23:47:17.613105979Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:47:17.613373 containerd[1460]: time="2026-04-24T23:47:17.613276673Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:47:17.613373 containerd[1460]: time="2026-04-24T23:47:17.613365049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 24 23:47:17.613429 containerd[1460]: time="2026-04-24T23:47:17.613381370Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 24 23:47:17.613429 containerd[1460]: time="2026-04-24T23:47:17.613394406Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613429 containerd[1460]: time="2026-04-24T23:47:17.613419087Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613481 containerd[1460]: time="2026-04-24T23:47:17.613436517Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613495 containerd[1460]: time="2026-04-24T23:47:17.613479335Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613511 containerd[1460]: time="2026-04-24T23:47:17.613491526Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613535 containerd[1460]: time="2026-04-24T23:47:17.613511732Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613535 containerd[1460]: time="2026-04-24T23:47:17.613526547Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613562 containerd[1460]: time="2026-04-24T23:47:17.613536065Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:47:17.613590 containerd[1460]: time="2026-04-24T23:47:17.613584530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613605 containerd[1460]: time="2026-04-24T23:47:17.613600993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 24 23:47:17.613631 containerd[1460]: time="2026-04-24T23:47:17.613614327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613654 containerd[1460]: time="2026-04-24T23:47:17.613647020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613669 containerd[1460]: time="2026-04-24T23:47:17.613658637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613683 containerd[1460]: time="2026-04-24T23:47:17.613671233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613696 containerd[1460]: time="2026-04-24T23:47:17.613684841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613720 containerd[1460]: time="2026-04-24T23:47:17.613702299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613720 containerd[1460]: time="2026-04-24T23:47:17.613713735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613832 containerd[1460]: time="2026-04-24T23:47:17.613809144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613832 containerd[1460]: time="2026-04-24T23:47:17.613825242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613860 containerd[1460]: time="2026-04-24T23:47:17.613841815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613881 containerd[1460]: time="2026-04-24T23:47:17.613864038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 24 23:47:17.613933 containerd[1460]: time="2026-04-24T23:47:17.613912162Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 24 23:47:17.613978 containerd[1460]: time="2026-04-24T23:47:17.613959029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.613978 containerd[1460]: time="2026-04-24T23:47:17.613970076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.614007 containerd[1460]: time="2026-04-24T23:47:17.613978009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:47:17.614079 containerd[1460]: time="2026-04-24T23:47:17.614060166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614175353Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614186988Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614196009Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614203071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614220568Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614238115Z" level=info msg="NRI interface is disabled by configuration." Apr 24 23:47:17.615681 containerd[1460]: time="2026-04-24T23:47:17.614254147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 24 23:47:17.615863 containerd[1460]: time="2026-04-24T23:47:17.615506607Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:47:17.615863 containerd[1460]: time="2026-04-24T23:47:17.615695604Z" level=info msg="Connect containerd service" Apr 24 23:47:17.615863 containerd[1460]: time="2026-04-24T23:47:17.615824533Z" level=info msg="using legacy CRI server" Apr 24 23:47:17.615863 containerd[1460]: time="2026-04-24T23:47:17.615836453Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:47:17.616341 containerd[1460]: time="2026-04-24T23:47:17.616305211Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:47:17.617735 containerd[1460]: time="2026-04-24T23:47:17.617694681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:47:17.618193 containerd[1460]: time="2026-04-24T23:47:17.618126190Z" level=info msg="Start subscribing containerd event" Apr 24 
23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618227809Z" level=info msg="Start recovering state" Apr 24 23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618306174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618362492Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618472964Z" level=info msg="Start event monitor" Apr 24 23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618496229Z" level=info msg="Start snapshots syncer" Apr 24 23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618524141Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:47:17.618855 containerd[1460]: time="2026-04-24T23:47:17.618541210Z" level=info msg="Start streaming server" Apr 24 23:47:17.619316 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:47:17.622453 containerd[1460]: time="2026-04-24T23:47:17.622389415Z" level=info msg="containerd successfully booted in 0.055614s" Apr 24 23:47:17.720813 tar[1453]: linux-amd64/README.md Apr 24 23:47:17.739163 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:47:17.746199 systemd-networkd[1391]: eth0: Gained IPv6LL Apr 24 23:47:17.748430 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:47:17.750521 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:47:17.761980 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 23:47:17.767467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:17.770118 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:47:17.787134 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Apr 24 23:47:17.787295 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 23:47:17.789132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:47:17.794453 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:47:19.298832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:47:19.300925 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:47:19.304404 systemd[1]: Startup finished in 1.071s (kernel) + 5.331s (initrd) + 4.866s (userspace) = 11.269s. Apr 24 23:47:19.335091 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:47:20.149172 kubelet[1540]: E0424 23:47:20.148664 1540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:47:20.152567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:47:20.152876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:47:20.153534 systemd[1]: kubelet.service: Consumed 2.052s CPU time. Apr 24 23:47:22.674240 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:47:22.675600 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:57612.service - OpenSSH per-connection server daemon (10.0.0.1:57612). 
Apr 24 23:47:22.752915 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 57612 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:22.755054 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:22.764732 systemd-logind[1438]: New session 1 of user core. Apr 24 23:47:22.765530 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:47:22.779548 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:47:22.790889 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:47:22.792937 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:47:22.800129 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:47:22.899148 systemd[1557]: Queued start job for default target default.target. Apr 24 23:47:22.911255 systemd[1557]: Created slice app.slice - User Application Slice. Apr 24 23:47:22.911300 systemd[1557]: Reached target paths.target - Paths. Apr 24 23:47:22.911312 systemd[1557]: Reached target timers.target - Timers. Apr 24 23:47:22.913835 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:47:22.929668 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:47:22.929801 systemd[1557]: Reached target sockets.target - Sockets. Apr 24 23:47:22.929814 systemd[1557]: Reached target basic.target - Basic System. Apr 24 23:47:22.929843 systemd[1557]: Reached target default.target - Main User Target. Apr 24 23:47:22.929868 systemd[1557]: Startup finished in 118ms. Apr 24 23:47:22.930175 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:47:22.931969 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 24 23:47:23.034767 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:57622.service - OpenSSH per-connection server daemon (10.0.0.1:57622). Apr 24 23:47:23.072432 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 57622 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:23.074675 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:23.080131 systemd-logind[1438]: New session 2 of user core. Apr 24 23:47:23.089624 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:47:23.151625 sshd[1568]: pam_unix(sshd:session): session closed for user core Apr 24 23:47:23.160108 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:57622.service: Deactivated successfully. Apr 24 23:47:23.161356 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:47:23.162480 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:47:23.163457 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). Apr 24 23:47:23.164316 systemd-logind[1438]: Removed session 2. Apr 24 23:47:23.201153 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:23.206977 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:23.212697 systemd-logind[1438]: New session 3 of user core. Apr 24 23:47:23.220926 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:47:23.268901 sshd[1575]: pam_unix(sshd:session): session closed for user core Apr 24 23:47:23.277591 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:57632.service: Deactivated successfully. Apr 24 23:47:23.278988 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:47:23.280116 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. 
Apr 24 23:47:23.287020 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:57648.service - OpenSSH per-connection server daemon (10.0.0.1:57648). Apr 24 23:47:23.287880 systemd-logind[1438]: Removed session 3. Apr 24 23:47:23.316884 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 57648 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:23.318642 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:23.324335 systemd-logind[1438]: New session 4 of user core. Apr 24 23:47:23.333912 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:47:23.422893 sshd[1582]: pam_unix(sshd:session): session closed for user core Apr 24 23:47:23.439327 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:57648.service: Deactivated successfully. Apr 24 23:47:23.440877 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:47:23.442733 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:47:23.456309 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:57650.service - OpenSSH per-connection server daemon (10.0.0.1:57650). Apr 24 23:47:23.458069 systemd-logind[1438]: Removed session 4. Apr 24 23:47:23.487266 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 57650 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:23.488322 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:23.491912 systemd-logind[1438]: New session 5 of user core. Apr 24 23:47:23.507069 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:47:23.587372 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:47:23.587825 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:47:23.612808 sudo[1592]: pam_unix(sudo:session): session closed for user root Apr 24 23:47:23.617480 sshd[1589]: pam_unix(sshd:session): session closed for user core Apr 24 23:47:23.638690 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:57650.service: Deactivated successfully. Apr 24 23:47:23.640904 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:47:23.642303 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:47:23.653943 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:57666.service - OpenSSH per-connection server daemon (10.0.0.1:57666). Apr 24 23:47:23.656222 systemd-logind[1438]: Removed session 5. Apr 24 23:47:23.691046 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 57666 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:23.692555 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:23.697763 systemd-logind[1438]: New session 6 of user core. Apr 24 23:47:23.708997 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 23:47:23.769490 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:47:23.769809 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:47:23.773813 sudo[1601]: pam_unix(sudo:session): session closed for user root Apr 24 23:47:23.782241 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:47:23.782463 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:47:23.803382 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 24 23:47:23.807918 auditctl[1604]: No rules Apr 24 23:47:23.808255 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:47:23.808509 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:47:23.811246 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:47:23.853221 augenrules[1622]: No rules Apr 24 23:47:23.854032 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:47:23.855408 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 24 23:47:23.857277 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 24 23:47:23.867041 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:57666.service: Deactivated successfully. Apr 24 23:47:23.868300 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:47:23.869391 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:47:23.870422 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:57668.service - OpenSSH per-connection server daemon (10.0.0.1:57668). Apr 24 23:47:23.871210 systemd-logind[1438]: Removed session 6. Apr 24 23:47:23.910292 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 57668 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:47:23.916579 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:47:23.924244 systemd-logind[1438]: New session 7 of user core. Apr 24 23:47:23.940929 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:47:23.992470 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:47:23.992720 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:47:24.716033 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 24 23:47:24.716121 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:47:25.488698 dockerd[1651]: time="2026-04-24T23:47:25.488393797Z" level=info msg="Starting up" Apr 24 23:47:25.746759 dockerd[1651]: time="2026-04-24T23:47:25.746440483Z" level=info msg="Loading containers: start." Apr 24 23:47:25.884849 kernel: Initializing XFRM netlink socket Apr 24 23:47:26.037414 systemd-networkd[1391]: docker0: Link UP Apr 24 23:47:26.056345 dockerd[1651]: time="2026-04-24T23:47:26.056217953Z" level=info msg="Loading containers: done." Apr 24 23:47:26.086441 dockerd[1651]: time="2026-04-24T23:47:26.086153484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:47:26.086978 dockerd[1651]: time="2026-04-24T23:47:26.086858299Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:47:26.087088 dockerd[1651]: time="2026-04-24T23:47:26.087047965Z" level=info msg="Daemon has completed initialization" Apr 24 23:47:26.132283 dockerd[1651]: time="2026-04-24T23:47:26.132173773Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:47:26.132931 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:47:26.981506 containerd[1460]: time="2026-04-24T23:47:26.981400048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:47:27.731768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696111524.mount: Deactivated successfully. 
Apr 24 23:47:29.243526 containerd[1460]: time="2026-04-24T23:47:29.243271465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:29.243526 containerd[1460]: time="2026-04-24T23:47:29.243394295Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 24 23:47:29.245302 containerd[1460]: time="2026-04-24T23:47:29.244632689Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:29.247120 containerd[1460]: time="2026-04-24T23:47:29.247047879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:29.247987 containerd[1460]: time="2026-04-24T23:47:29.247956235Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.266483922s" Apr 24 23:47:29.248062 containerd[1460]: time="2026-04-24T23:47:29.247999390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 24 23:47:29.251542 containerd[1460]: time="2026-04-24T23:47:29.251378652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:47:30.170639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 24 23:47:30.184989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:30.588588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:47:30.592920 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:47:30.816204 kubelet[1869]: E0424 23:47:30.815711 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:47:30.821982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:47:30.822121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:47:30.897096 containerd[1460]: time="2026-04-24T23:47:30.896536713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:30.897096 containerd[1460]: time="2026-04-24T23:47:30.896908700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 24 23:47:30.917890 containerd[1460]: time="2026-04-24T23:47:30.917449191Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:30.920885 containerd[1460]: time="2026-04-24T23:47:30.920857558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:30.921829 containerd[1460]: 
time="2026-04-24T23:47:30.921724532Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.670231595s" Apr 24 23:47:30.921829 containerd[1460]: time="2026-04-24T23:47:30.921827428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 24 23:47:30.923365 containerd[1460]: time="2026-04-24T23:47:30.923338871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 24 23:47:32.382840 containerd[1460]: time="2026-04-24T23:47:32.382548256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:32.384634 containerd[1460]: time="2026-04-24T23:47:32.382953578Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 24 23:47:32.384634 containerd[1460]: time="2026-04-24T23:47:32.383765510Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:32.386043 containerd[1460]: time="2026-04-24T23:47:32.385953582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:32.386962 containerd[1460]: time="2026-04-24T23:47:32.386937925Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id 
\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.463567894s" Apr 24 23:47:32.387001 containerd[1460]: time="2026-04-24T23:47:32.386970556Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 24 23:47:32.388377 containerd[1460]: time="2026-04-24T23:47:32.388354411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:47:33.600618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805344629.mount: Deactivated successfully. Apr 24 23:47:34.326377 containerd[1460]: time="2026-04-24T23:47:34.326163943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:34.329672 containerd[1460]: time="2026-04-24T23:47:34.326443485Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 24 23:47:34.329672 containerd[1460]: time="2026-04-24T23:47:34.328172909Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:34.330820 containerd[1460]: time="2026-04-24T23:47:34.330728082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:34.331240 containerd[1460]: time="2026-04-24T23:47:34.331185890Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo 
tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.942801806s" Apr 24 23:47:34.331304 containerd[1460]: time="2026-04-24T23:47:34.331243766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 24 23:47:34.332758 containerd[1460]: time="2026-04-24T23:47:34.332727491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 23:47:34.835105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133644175.mount: Deactivated successfully. Apr 24 23:47:35.870615 containerd[1460]: time="2026-04-24T23:47:35.870420261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:35.870615 containerd[1460]: time="2026-04-24T23:47:35.870661530Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 24 23:47:35.872909 containerd[1460]: time="2026-04-24T23:47:35.871844476Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:35.880877 containerd[1460]: time="2026-04-24T23:47:35.880627474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:35.881918 containerd[1460]: time="2026-04-24T23:47:35.881747259Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.548977602s" Apr 24 23:47:35.881970 containerd[1460]: time="2026-04-24T23:47:35.881921521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 24 23:47:35.884093 containerd[1460]: time="2026-04-24T23:47:35.883652912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 24 23:47:36.392320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973506933.mount: Deactivated successfully. Apr 24 23:47:36.399543 containerd[1460]: time="2026-04-24T23:47:36.399298926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:36.400243 containerd[1460]: time="2026-04-24T23:47:36.399743637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 24 23:47:36.401105 containerd[1460]: time="2026-04-24T23:47:36.400988501Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:36.405107 containerd[1460]: time="2026-04-24T23:47:36.403347107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:36.415405 containerd[1460]: time="2026-04-24T23:47:36.415129086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.265472ms" Apr 24 
23:47:36.415986 containerd[1460]: time="2026-04-24T23:47:36.415450695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 24 23:47:36.430567 containerd[1460]: time="2026-04-24T23:47:36.430279733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 24 23:47:36.938475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271379911.mount: Deactivated successfully. Apr 24 23:47:38.066067 containerd[1460]: time="2026-04-24T23:47:38.065904764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:38.067285 containerd[1460]: time="2026-04-24T23:47:38.066263551Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 24 23:47:38.067285 containerd[1460]: time="2026-04-24T23:47:38.067195266Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:38.070763 containerd[1460]: time="2026-04-24T23:47:38.070726553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:38.071665 containerd[1460]: time="2026-04-24T23:47:38.071627736Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.641192031s" Apr 24 23:47:38.071700 containerd[1460]: time="2026-04-24T23:47:38.071668772Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 24 23:47:40.866090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:47:40.878066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:40.886282 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:47:40.886344 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:47:40.886551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:47:40.889016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:40.919147 systemd[1]: Reloading requested from client PID 2044 ('systemctl') (unit session-7.scope)... Apr 24 23:47:40.919170 systemd[1]: Reloading... Apr 24 23:47:40.987864 zram_generator::config[2083]: No configuration found. Apr 24 23:47:41.077962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:47:41.137674 systemd[1]: Reloading finished in 218 ms. Apr 24 23:47:41.182375 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:41.184691 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:47:41.184931 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:47:41.186230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:41.313376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:47:41.317395 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:47:41.357312 kubelet[2133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:47:41.357312 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:47:41.357312 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:47:41.357644 kubelet[2133]: I0424 23:47:41.357418 2133 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:47:41.519434 kubelet[2133]: I0424 23:47:41.519330 2133 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:47:41.519434 kubelet[2133]: I0424 23:47:41.519408 2133 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:47:41.520183 kubelet[2133]: I0424 23:47:41.519881 2133 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:47:41.539652 kubelet[2133]: E0424 23:47:41.539597 2133 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:47:41.543354 kubelet[2133]: I0424 23:47:41.542724 2133 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:47:41.551098 kubelet[2133]: E0424 23:47:41.551056 2133 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:47:41.551098 kubelet[2133]: I0424 23:47:41.551088 2133 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:47:41.554998 kubelet[2133]: I0424 23:47:41.554973 2133 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 23:47:41.555320 kubelet[2133]: I0424 23:47:41.555276 2133 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:47:41.555482 kubelet[2133]: I0424 23:47:41.555311 2133 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:47:41.555633 kubelet[2133]: I0424 23:47:41.555509 2133 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:47:41.555633 kubelet[2133]: I0424 23:47:41.555517 2133 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:47:41.555754 kubelet[2133]: I0424 23:47:41.555741 2133 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:47:41.558793 kubelet[2133]: I0424 23:47:41.558725 2133 kubelet.go:480] "Attempting to sync node with API 
server" Apr 24 23:47:41.558793 kubelet[2133]: I0424 23:47:41.558757 2133 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:47:41.558881 kubelet[2133]: I0424 23:47:41.558860 2133 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:47:41.560454 kubelet[2133]: I0424 23:47:41.560335 2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:47:41.562055 kubelet[2133]: E0424 23:47:41.562019 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:47:41.562055 kubelet[2133]: E0424 23:47:41.562022 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:47:41.562831 kubelet[2133]: I0424 23:47:41.562805 2133 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:47:41.563414 kubelet[2133]: I0424 23:47:41.563386 2133 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:47:41.564013 kubelet[2133]: W0424 23:47:41.563986 2133 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 24 23:47:41.568983 kubelet[2133]: I0424 23:47:41.568951 2133 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:47:41.569127 kubelet[2133]: I0424 23:47:41.569101 2133 server.go:1289] "Started kubelet" Apr 24 23:47:41.569609 kubelet[2133]: I0424 23:47:41.569579 2133 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:47:41.569838 kubelet[2133]: I0424 23:47:41.569756 2133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:47:41.570147 kubelet[2133]: I0424 23:47:41.570131 2133 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:47:41.571686 kubelet[2133]: I0424 23:47:41.571664 2133 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:47:41.572233 kubelet[2133]: I0424 23:47:41.572211 2133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:47:41.573583 kubelet[2133]: I0424 23:47:41.573489 2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:47:41.574440 kubelet[2133]: I0424 23:47:41.573803 2133 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:47:41.574440 kubelet[2133]: I0424 23:47:41.573949 2133 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:47:41.574440 kubelet[2133]: I0424 23:47:41.574059 2133 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:47:41.574440 kubelet[2133]: E0424 23:47:41.574312 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:47:41.574440 kubelet[2133]: E0424 23:47:41.573460 2133 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a96fceb53cac12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:47:41.568969746 +0000 UTC m=+0.247761700,LastTimestamp:2026-04-24 23:47:41.568969746 +0000 UTC m=+0.247761700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 23:47:41.574440 kubelet[2133]: E0424 23:47:41.574414 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:41.574672 kubelet[2133]: E0424 23:47:41.574519 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Apr 24 23:47:41.575277 kubelet[2133]: I0424 23:47:41.575258 2133 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:47:41.575366 kubelet[2133]: I0424 23:47:41.575337 2133 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:47:41.576433 kubelet[2133]: E0424 23:47:41.576394 2133 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:47:41.576912 kubelet[2133]: I0424 23:47:41.576706 2133 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:47:41.586465 kubelet[2133]: I0424 23:47:41.586445 2133 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:47:41.586465 kubelet[2133]: I0424 23:47:41.586461 2133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:47:41.586538 kubelet[2133]: I0424 23:47:41.586473 2133 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:47:41.633848 kubelet[2133]: I0424 23:47:41.633638 2133 policy_none.go:49] "None policy: Start" Apr 24 23:47:41.633848 kubelet[2133]: I0424 23:47:41.633905 2133 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:47:41.634602 kubelet[2133]: I0424 23:47:41.634011 2133 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:47:41.640917 kubelet[2133]: I0424 23:47:41.640852 2133 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:47:41.642070 kubelet[2133]: I0424 23:47:41.642049 2133 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:47:41.642114 kubelet[2133]: I0424 23:47:41.642107 2133 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:47:41.642265 kubelet[2133]: I0424 23:47:41.642151 2133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:47:41.642438 kubelet[2133]: I0424 23:47:41.642409 2133 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:47:41.642539 kubelet[2133]: E0424 23:47:41.642505 2133 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:47:41.643505 kubelet[2133]: E0424 23:47:41.643327 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:47:41.644249 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 23:47:41.653283 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 23:47:41.655319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 24 23:47:41.673490 kubelet[2133]: E0424 23:47:41.673444 2133 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:47:41.673685 kubelet[2133]: I0424 23:47:41.673661 2133 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:47:41.673763 kubelet[2133]: I0424 23:47:41.673681 2133 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:47:41.674438 kubelet[2133]: I0424 23:47:41.673960 2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:47:41.675006 kubelet[2133]: E0424 23:47:41.674974 2133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 24 23:47:41.675076 kubelet[2133]: E0424 23:47:41.675024 2133 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 24 23:47:41.763448 systemd[1]: Created slice kubepods-burstable-pod2cf993ee57dcfd5723500611fa6f26b0.slice - libcontainer container kubepods-burstable-pod2cf993ee57dcfd5723500611fa6f26b0.slice. Apr 24 23:47:41.773524 kubelet[2133]: E0424 23:47:41.773403 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:41.775462 kubelet[2133]: E0424 23:47:41.775289 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Apr 24 23:47:41.775634 kubelet[2133]: I0424 23:47:41.775580 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:47:41.776002 kubelet[2133]: E0424 23:47:41.775973 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 23:47:41.776023 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 24 23:47:41.777326 kubelet[2133]: E0424 23:47:41.777306 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:41.799929 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 24 23:47:41.801304 kubelet[2133]: E0424 23:47:41.801285 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:41.876189 kubelet[2133]: I0424 23:47:41.875909 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:41.876189 kubelet[2133]: I0424 23:47:41.876089 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:41.876189 kubelet[2133]: I0424 23:47:41.876110 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:41.876189 kubelet[2133]: I0424 23:47:41.876129 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:41.876189 kubelet[2133]: I0424 23:47:41.876143 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cf993ee57dcfd5723500611fa6f26b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cf993ee57dcfd5723500611fa6f26b0\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:41.877258 kubelet[2133]: I0424 23:47:41.876155 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cf993ee57dcfd5723500611fa6f26b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cf993ee57dcfd5723500611fa6f26b0\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:41.877258 kubelet[2133]: I0424 23:47:41.876215 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cf993ee57dcfd5723500611fa6f26b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2cf993ee57dcfd5723500611fa6f26b0\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:41.877258 kubelet[2133]: I0424 23:47:41.876269 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:41.877258 kubelet[2133]: I0424 23:47:41.876286 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:41.980698 kubelet[2133]: I0424 23:47:41.980543 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:47:41.981434 kubelet[2133]: E0424 
23:47:41.981276 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 23:47:42.076351 kubelet[2133]: E0424 23:47:42.075634 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:42.077976 kubelet[2133]: E0424 23:47:42.077949 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:42.079143 containerd[1460]: time="2026-04-24T23:47:42.079047134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2cf993ee57dcfd5723500611fa6f26b0,Namespace:kube-system,Attempt:0,}" Apr 24 23:47:42.079993 containerd[1460]: time="2026-04-24T23:47:42.079072878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 24 23:47:42.102734 kubelet[2133]: E0424 23:47:42.102682 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:42.103387 containerd[1460]: time="2026-04-24T23:47:42.103323043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 24 23:47:42.176217 kubelet[2133]: E0424 23:47:42.176185 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Apr 24 23:47:42.384684 kubelet[2133]: 
I0424 23:47:42.384292 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:47:42.385610 kubelet[2133]: E0424 23:47:42.384790 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 23:47:42.518857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021212861.mount: Deactivated successfully. Apr 24 23:47:42.525492 containerd[1460]: time="2026-04-24T23:47:42.525431246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:47:42.526374 containerd[1460]: time="2026-04-24T23:47:42.526351328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:47:42.526584 containerd[1460]: time="2026-04-24T23:47:42.526540366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:47:42.527298 containerd[1460]: time="2026-04-24T23:47:42.527239599Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:47:42.527871 containerd[1460]: time="2026-04-24T23:47:42.527811434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 24 23:47:42.528371 containerd[1460]: time="2026-04-24T23:47:42.528340296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:47:42.528850 containerd[1460]: time="2026-04-24T23:47:42.528817457Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:47:42.531666 containerd[1460]: time="2026-04-24T23:47:42.531621367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:47:42.532753 containerd[1460]: time="2026-04-24T23:47:42.532716570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.434308ms" Apr 24 23:47:42.533235 containerd[1460]: time="2026-04-24T23:47:42.533203967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 429.771964ms" Apr 24 23:47:42.535687 containerd[1460]: time="2026-04-24T23:47:42.535621814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.398459ms" Apr 24 23:47:42.625064 containerd[1460]: time="2026-04-24T23:47:42.624855512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:47:42.626127 containerd[1460]: time="2026-04-24T23:47:42.625443976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:47:42.626127 containerd[1460]: time="2026-04-24T23:47:42.625535734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:42.626127 containerd[1460]: time="2026-04-24T23:47:42.625638975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:42.626439 containerd[1460]: time="2026-04-24T23:47:42.625858873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:47:42.626439 containerd[1460]: time="2026-04-24T23:47:42.625903519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:47:42.626439 containerd[1460]: time="2026-04-24T23:47:42.625935669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:42.626439 containerd[1460]: time="2026-04-24T23:47:42.626032089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:42.627024 containerd[1460]: time="2026-04-24T23:47:42.626943822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:47:42.627024 containerd[1460]: time="2026-04-24T23:47:42.626986664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:47:42.627024 containerd[1460]: time="2026-04-24T23:47:42.626998149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:42.627098 containerd[1460]: time="2026-04-24T23:47:42.627047206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:42.655977 systemd[1]: Started cri-containerd-041aaa96fc58b7ce99fa618a97b130f8c0fd1028011e59928a4cd3daaf5fad41.scope - libcontainer container 041aaa96fc58b7ce99fa618a97b130f8c0fd1028011e59928a4cd3daaf5fad41. Apr 24 23:47:42.656903 systemd[1]: Started cri-containerd-0cc748d3ed066cfe4157288f7037207806d598cc8eaed4e2b26d35b5231c9f9d.scope - libcontainer container 0cc748d3ed066cfe4157288f7037207806d598cc8eaed4e2b26d35b5231c9f9d. Apr 24 23:47:42.657701 systemd[1]: Started cri-containerd-d60a902d5f9ebe30898f6571b325bdd59426263a0a03a3ef33017118752a7a23.scope - libcontainer container d60a902d5f9ebe30898f6571b325bdd59426263a0a03a3ef33017118752a7a23. 
Apr 24 23:47:42.671434 kubelet[2133]: E0424 23:47:42.671280 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:47:42.700153 containerd[1460]: time="2026-04-24T23:47:42.699689156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"041aaa96fc58b7ce99fa618a97b130f8c0fd1028011e59928a4cd3daaf5fad41\"" Apr 24 23:47:42.700724 kubelet[2133]: E0424 23:47:42.700697 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:42.709115 containerd[1460]: time="2026-04-24T23:47:42.709030483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cc748d3ed066cfe4157288f7037207806d598cc8eaed4e2b26d35b5231c9f9d\"" Apr 24 23:47:42.710496 kubelet[2133]: E0424 23:47:42.710454 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:42.710808 containerd[1460]: time="2026-04-24T23:47:42.710748650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2cf993ee57dcfd5723500611fa6f26b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d60a902d5f9ebe30898f6571b325bdd59426263a0a03a3ef33017118752a7a23\"" Apr 24 23:47:42.711741 kubelet[2133]: E0424 23:47:42.711713 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:47:42.712993 kubelet[2133]: E0424 23:47:42.712949 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:42.713331 containerd[1460]: time="2026-04-24T23:47:42.713277772Z" level=info msg="CreateContainer within sandbox \"041aaa96fc58b7ce99fa618a97b130f8c0fd1028011e59928a4cd3daaf5fad41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:47:42.715114 containerd[1460]: time="2026-04-24T23:47:42.715091800Z" level=info msg="CreateContainer within sandbox \"0cc748d3ed066cfe4157288f7037207806d598cc8eaed4e2b26d35b5231c9f9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:47:42.717552 containerd[1460]: time="2026-04-24T23:47:42.717525742Z" level=info msg="CreateContainer within sandbox \"d60a902d5f9ebe30898f6571b325bdd59426263a0a03a3ef33017118752a7a23\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:47:42.728190 containerd[1460]: time="2026-04-24T23:47:42.728070300Z" level=info msg="CreateContainer within sandbox \"041aaa96fc58b7ce99fa618a97b130f8c0fd1028011e59928a4cd3daaf5fad41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a82f4061708af700e23d5b74b7852363bdcb09d49bb57cf512a2c78bc989fc69\"" Apr 24 23:47:42.728750 containerd[1460]: time="2026-04-24T23:47:42.728698768Z" level=info msg="StartContainer for \"a82f4061708af700e23d5b74b7852363bdcb09d49bb57cf512a2c78bc989fc69\"" Apr 24 23:47:42.734002 containerd[1460]: time="2026-04-24T23:47:42.733849887Z" level=info msg="CreateContainer within sandbox 
\"0cc748d3ed066cfe4157288f7037207806d598cc8eaed4e2b26d35b5231c9f9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d2ae395a13d77924bde8feeedabc0e8b628e7a7f252944984bc870ddfb640d5f\"" Apr 24 23:47:42.736432 containerd[1460]: time="2026-04-24T23:47:42.736404750Z" level=info msg="StartContainer for \"d2ae395a13d77924bde8feeedabc0e8b628e7a7f252944984bc870ddfb640d5f\"" Apr 24 23:47:42.739025 containerd[1460]: time="2026-04-24T23:47:42.738979113Z" level=info msg="CreateContainer within sandbox \"d60a902d5f9ebe30898f6571b325bdd59426263a0a03a3ef33017118752a7a23\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66cee47f718bdc3003c3c577a3440c9863e6d5eb6a627b932e9fee4d3d88a90c\"" Apr 24 23:47:42.740831 containerd[1460]: time="2026-04-24T23:47:42.739543396Z" level=info msg="StartContainer for \"66cee47f718bdc3003c3c577a3440c9863e6d5eb6a627b932e9fee4d3d88a90c\"" Apr 24 23:47:42.757978 systemd[1]: Started cri-containerd-a82f4061708af700e23d5b74b7852363bdcb09d49bb57cf512a2c78bc989fc69.scope - libcontainer container a82f4061708af700e23d5b74b7852363bdcb09d49bb57cf512a2c78bc989fc69. Apr 24 23:47:42.761641 systemd[1]: Started cri-containerd-66cee47f718bdc3003c3c577a3440c9863e6d5eb6a627b932e9fee4d3d88a90c.scope - libcontainer container 66cee47f718bdc3003c3c577a3440c9863e6d5eb6a627b932e9fee4d3d88a90c. Apr 24 23:47:42.762842 systemd[1]: Started cri-containerd-d2ae395a13d77924bde8feeedabc0e8b628e7a7f252944984bc870ddfb640d5f.scope - libcontainer container d2ae395a13d77924bde8feeedabc0e8b628e7a7f252944984bc870ddfb640d5f. 
Apr 24 23:47:42.807843 containerd[1460]: time="2026-04-24T23:47:42.807618078Z" level=info msg="StartContainer for \"d2ae395a13d77924bde8feeedabc0e8b628e7a7f252944984bc870ddfb640d5f\" returns successfully" Apr 24 23:47:42.809799 containerd[1460]: time="2026-04-24T23:47:42.807902571Z" level=info msg="StartContainer for \"66cee47f718bdc3003c3c577a3440c9863e6d5eb6a627b932e9fee4d3d88a90c\" returns successfully" Apr 24 23:47:42.809799 containerd[1460]: time="2026-04-24T23:47:42.807627477Z" level=info msg="StartContainer for \"a82f4061708af700e23d5b74b7852363bdcb09d49bb57cf512a2c78bc989fc69\" returns successfully" Apr 24 23:47:42.826661 kubelet[2133]: E0424 23:47:42.826631 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:47:42.875604 kubelet[2133]: E0424 23:47:42.875571 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:47:43.191498 kubelet[2133]: I0424 23:47:43.191218 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:47:43.659529 kubelet[2133]: E0424 23:47:43.659173 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:43.659529 kubelet[2133]: E0424 23:47:43.659397 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 
23:47:43.661006 kubelet[2133]: E0424 23:47:43.660985 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:43.661103 kubelet[2133]: E0424 23:47:43.661088 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:43.662734 kubelet[2133]: E0424 23:47:43.662694 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:43.662886 kubelet[2133]: E0424 23:47:43.662819 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:43.713201 kubelet[2133]: E0424 23:47:43.713060 2133 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 24 23:47:43.804185 kubelet[2133]: I0424 23:47:43.803269 2133 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 23:47:43.805610 kubelet[2133]: E0424 23:47:43.804271 2133 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 24 23:47:43.831802 kubelet[2133]: E0424 23:47:43.830641 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:43.931709 kubelet[2133]: E0424 23:47:43.931447 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.032356 kubelet[2133]: E0424 23:47:44.032001 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.133862 kubelet[2133]: 
E0424 23:47:44.133570 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.235682 kubelet[2133]: E0424 23:47:44.234655 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.336045 kubelet[2133]: E0424 23:47:44.335844 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.437264 kubelet[2133]: E0424 23:47:44.437000 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.539018 kubelet[2133]: E0424 23:47:44.538181 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.639259 kubelet[2133]: E0424 23:47:44.638760 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.668443 kubelet[2133]: E0424 23:47:44.668411 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:44.669195 kubelet[2133]: E0424 23:47:44.668831 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:44.669195 kubelet[2133]: E0424 23:47:44.668411 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:44.669195 kubelet[2133]: E0424 23:47:44.669021 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:44.740764 kubelet[2133]: E0424 23:47:44.740453 2133 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.842713 kubelet[2133]: E0424 23:47:44.841906 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:44.943724 kubelet[2133]: E0424 23:47:44.943316 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.044223 kubelet[2133]: E0424 23:47:45.044009 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.047011 kubelet[2133]: E0424 23:47:45.046985 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:47:45.047366 kubelet[2133]: E0424 23:47:45.047338 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:45.145228 kubelet[2133]: E0424 23:47:45.144639 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.246428 kubelet[2133]: E0424 23:47:45.245590 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.347450 kubelet[2133]: E0424 23:47:45.347216 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.448497 kubelet[2133]: E0424 23:47:45.448375 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.550119 kubelet[2133]: E0424 23:47:45.549676 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.651122 kubelet[2133]: E0424 23:47:45.650666 2133 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:45.674743 kubelet[2133]: I0424 23:47:45.674711 2133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:45.685517 kubelet[2133]: I0424 23:47:45.685475 2133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:45.692088 kubelet[2133]: I0424 23:47:45.691825 2133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:45.769812 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)... Apr 24 23:47:45.769828 systemd[1]: Reloading... Apr 24 23:47:45.836145 zram_generator::config[2460]: No configuration found. Apr 24 23:47:45.912904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:47:45.968267 systemd[1]: Reloading finished in 198 ms. Apr 24 23:47:46.002294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:46.025971 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:47:46.026214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:47:46.034106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:47:46.151031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:47:46.154472 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:47:46.201720 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:47:46.201720 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:47:46.201720 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:47:46.201720 kubelet[2505]: I0424 23:47:46.201717 2505 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:47:46.216416 kubelet[2505]: I0424 23:47:46.216216 2505 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:47:46.216416 kubelet[2505]: I0424 23:47:46.216284 2505 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:47:46.217116 kubelet[2505]: I0424 23:47:46.216619 2505 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:47:46.218158 kubelet[2505]: I0424 23:47:46.218131 2505 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:47:46.220132 kubelet[2505]: I0424 23:47:46.220098 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:47:46.225216 kubelet[2505]: E0424 23:47:46.225189 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:47:46.225216 kubelet[2505]: I0424 23:47:46.225214 2505 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 24 23:47:46.229003 kubelet[2505]: I0424 23:47:46.228976 2505 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 23:47:46.229170 kubelet[2505]: I0424 23:47:46.229130 2505 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:47:46.229293 kubelet[2505]: I0424 23:47:46.229163 2505 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:47:46.229293 kubelet[2505]: I0424 23:47:46.229292 2505 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:47:46.229539 kubelet[2505]: I0424 23:47:46.229300 2505 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:47:46.229539 kubelet[2505]: I0424 23:47:46.229352 2505 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:47:46.229539 kubelet[2505]: I0424 23:47:46.229502 2505 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:47:46.229539 kubelet[2505]: I0424 23:47:46.229533 2505 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:47:46.229597 kubelet[2505]: I0424 23:47:46.229554 2505 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:47:46.229597 kubelet[2505]: I0424 23:47:46.229569 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:47:46.232102 kubelet[2505]: I0424 23:47:46.232071 2505 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:47:46.232496 kubelet[2505]: I0424 23:47:46.232469 2505 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:47:46.235209 kubelet[2505]: I0424 23:47:46.235184 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:47:46.235248 kubelet[2505]: I0424 23:47:46.235226 2505 server.go:1289] "Started kubelet" Apr 24 23:47:46.236449 kubelet[2505]: I0424 23:47:46.236410 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:47:46.238069 kubelet[2505]: I0424 23:47:46.238001 2505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:47:46.239517 kubelet[2505]: I0424 23:47:46.239462 2505 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:47:46.243850 kubelet[2505]: I0424 
23:47:46.243815 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:47:46.245689 kubelet[2505]: I0424 23:47:46.239606 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:47:46.246536 kubelet[2505]: I0424 23:47:46.246066 2505 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:47:46.246536 kubelet[2505]: I0424 23:47:46.246324 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:47:46.246816 kubelet[2505]: E0424 23:47:46.246798 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:47:46.249171 kubelet[2505]: I0424 23:47:46.249132 2505 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:47:46.249846 kubelet[2505]: I0424 23:47:46.248977 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:47:46.251559 kubelet[2505]: I0424 23:47:46.250839 2505 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:47:46.251914 kubelet[2505]: E0424 23:47:46.251757 2505 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:47:46.252463 kubelet[2505]: I0424 23:47:46.251762 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:47:46.255908 kubelet[2505]: I0424 23:47:46.253506 2505 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:47:46.258588 kubelet[2505]: I0424 23:47:46.258517 2505 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 24 23:47:46.259667 kubelet[2505]: I0424 23:47:46.259631 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:47:46.259716 kubelet[2505]: I0424 23:47:46.259676 2505 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:47:46.259732 kubelet[2505]: I0424 23:47:46.259721 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:47:46.259748 kubelet[2505]: I0424 23:47:46.259736 2505 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:47:46.259817 kubelet[2505]: E0424 23:47:46.259800 2505 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:47:46.283068 kubelet[2505]: I0424 23:47:46.282979 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:47:46.283068 kubelet[2505]: I0424 23:47:46.282995 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:47:46.283068 kubelet[2505]: I0424 23:47:46.283009 2505 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:47:46.283194 kubelet[2505]: I0424 23:47:46.283178 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:47:46.283213 kubelet[2505]: I0424 23:47:46.283193 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:47:46.283213 kubelet[2505]: I0424 23:47:46.283207 2505 policy_none.go:49] "None policy: Start" Apr 24 23:47:46.283241 kubelet[2505]: I0424 23:47:46.283214 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:47:46.283241 kubelet[2505]: I0424 23:47:46.283222 2505 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:47:46.283323 kubelet[2505]: I0424 23:47:46.283307 2505 state_mem.go:75] "Updated machine memory state" Apr 24 23:47:46.290542 kubelet[2505]: E0424 23:47:46.290510 2505 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:47:46.290654 kubelet[2505]: I0424 23:47:46.290640 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:47:46.290676 kubelet[2505]: I0424 23:47:46.290657 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:47:46.290890 kubelet[2505]: I0424 23:47:46.290846 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:47:46.291664 kubelet[2505]: E0424 23:47:46.291567 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:47:46.363898 kubelet[2505]: I0424 23:47:46.361879 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.363898 kubelet[2505]: I0424 23:47:46.362112 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:46.363898 kubelet[2505]: I0424 23:47:46.361899 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:46.370241 kubelet[2505]: E0424 23:47:46.370187 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.370241 kubelet[2505]: E0424 23:47:46.370244 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:46.370476 kubelet[2505]: E0424 23:47:46.370176 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:46.399068 kubelet[2505]: I0424 23:47:46.399041 2505 kubelet_node_status.go:75] "Attempting to 
register node" node="localhost" Apr 24 23:47:46.410193 kubelet[2505]: I0424 23:47:46.409842 2505 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 24 23:47:46.411126 kubelet[2505]: I0424 23:47:46.410383 2505 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 23:47:46.552850 kubelet[2505]: I0424 23:47:46.552305 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cf993ee57dcfd5723500611fa6f26b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cf993ee57dcfd5723500611fa6f26b0\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:46.552850 kubelet[2505]: I0424 23:47:46.552477 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.552850 kubelet[2505]: I0424 23:47:46.552506 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.552850 kubelet[2505]: I0424 23:47:46.552525 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.552850 kubelet[2505]: I0424 23:47:46.552539 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.553683 kubelet[2505]: I0424 23:47:46.552577 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cf993ee57dcfd5723500611fa6f26b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2cf993ee57dcfd5723500611fa6f26b0\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:46.553683 kubelet[2505]: I0424 23:47:46.552589 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:47:46.553683 kubelet[2505]: I0424 23:47:46.552601 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:46.553683 kubelet[2505]: I0424 23:47:46.552618 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cf993ee57dcfd5723500611fa6f26b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2cf993ee57dcfd5723500611fa6f26b0\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:46.672162 kubelet[2505]: E0424 23:47:46.671995 2505 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:46.672848 kubelet[2505]: E0424 23:47:46.672717 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:46.673074 kubelet[2505]: E0424 23:47:46.673056 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:47.231034 kubelet[2505]: I0424 23:47:47.230826 2505 apiserver.go:52] "Watching apiserver" Apr 24 23:47:47.250110 kubelet[2505]: I0424 23:47:47.250045 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:47:47.275040 kubelet[2505]: I0424 23:47:47.273616 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:47.275040 kubelet[2505]: I0424 23:47:47.273766 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:47.275040 kubelet[2505]: E0424 23:47:47.274236 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:47.282459 kubelet[2505]: E0424 23:47:47.282335 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 24 23:47:47.282459 kubelet[2505]: E0424 23:47:47.282461 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:47.284314 kubelet[2505]: E0424 23:47:47.283402 2505 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 24 23:47:47.284314 kubelet[2505]: E0424 23:47:47.283512 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:47.321505 kubelet[2505]: I0424 23:47:47.321146 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.321005116 podStartE2EDuration="2.321005116s" podCreationTimestamp="2026-04-24 23:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:47:47.310025942 +0000 UTC m=+1.150953042" watchObservedRunningTime="2026-04-24 23:47:47.321005116 +0000 UTC m=+1.161932197" Apr 24 23:47:47.484790 kubelet[2505]: I0424 23:47:47.484319 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.484268759 podStartE2EDuration="2.484268759s" podCreationTimestamp="2026-04-24 23:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:47:47.470190118 +0000 UTC m=+1.311117215" watchObservedRunningTime="2026-04-24 23:47:47.484268759 +0000 UTC m=+1.325195854" Apr 24 23:47:47.500866 kubelet[2505]: I0424 23:47:47.500727 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.500693059 podStartE2EDuration="2.500693059s" podCreationTimestamp="2026-04-24 23:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:47:47.48517191 +0000 UTC m=+1.326098991" watchObservedRunningTime="2026-04-24 
23:47:47.500693059 +0000 UTC m=+1.341620140" Apr 24 23:47:48.275478 kubelet[2505]: E0424 23:47:48.275253 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:48.275478 kubelet[2505]: E0424 23:47:48.275379 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:48.275478 kubelet[2505]: E0424 23:47:48.275383 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:51.619079 kubelet[2505]: I0424 23:47:51.618789 2505 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:47:51.620639 kubelet[2505]: I0424 23:47:51.619850 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:47:51.620666 containerd[1460]: time="2026-04-24T23:47:51.619584697Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:47:52.547544 systemd[1]: Created slice kubepods-besteffort-pod2e377684_0f4a_4bf7_97a4_5dc72ba9220a.slice - libcontainer container kubepods-besteffort-pod2e377684_0f4a_4bf7_97a4_5dc72ba9220a.slice. Apr 24 23:47:52.641952 systemd[1]: Created slice kubepods-besteffort-pod9147373b_1e15_41dd_96a7_0e916f85c355.slice - libcontainer container kubepods-besteffort-pod9147373b_1e15_41dd_96a7_0e916f85c355.slice. 
Apr 24 23:47:52.650798 kubelet[2505]: I0424 23:47:52.650687 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e377684-0f4a-4bf7-97a4-5dc72ba9220a-kube-proxy\") pod \"kube-proxy-hrzm8\" (UID: \"2e377684-0f4a-4bf7-97a4-5dc72ba9220a\") " pod="kube-system/kube-proxy-hrzm8" Apr 24 23:47:52.650798 kubelet[2505]: I0424 23:47:52.650758 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e377684-0f4a-4bf7-97a4-5dc72ba9220a-xtables-lock\") pod \"kube-proxy-hrzm8\" (UID: \"2e377684-0f4a-4bf7-97a4-5dc72ba9220a\") " pod="kube-system/kube-proxy-hrzm8" Apr 24 23:47:52.650798 kubelet[2505]: I0424 23:47:52.650791 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e377684-0f4a-4bf7-97a4-5dc72ba9220a-lib-modules\") pod \"kube-proxy-hrzm8\" (UID: \"2e377684-0f4a-4bf7-97a4-5dc72ba9220a\") " pod="kube-system/kube-proxy-hrzm8" Apr 24 23:47:52.650798 kubelet[2505]: I0424 23:47:52.650806 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjx9\" (UniqueName: \"kubernetes.io/projected/2e377684-0f4a-4bf7-97a4-5dc72ba9220a-kube-api-access-nmjx9\") pod \"kube-proxy-hrzm8\" (UID: \"2e377684-0f4a-4bf7-97a4-5dc72ba9220a\") " pod="kube-system/kube-proxy-hrzm8" Apr 24 23:47:52.751660 kubelet[2505]: I0424 23:47:52.751277 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9147373b-1e15-41dd-96a7-0e916f85c355-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-7r48c\" (UID: \"9147373b-1e15-41dd-96a7-0e916f85c355\") " pod="tigera-operator/tigera-operator-6bf85f8dd-7r48c" Apr 24 23:47:52.751660 kubelet[2505]: I0424 
23:47:52.751480 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4r7\" (UniqueName: \"kubernetes.io/projected/9147373b-1e15-41dd-96a7-0e916f85c355-kube-api-access-lg4r7\") pod \"tigera-operator-6bf85f8dd-7r48c\" (UID: \"9147373b-1e15-41dd-96a7-0e916f85c355\") " pod="tigera-operator/tigera-operator-6bf85f8dd-7r48c" Apr 24 23:47:52.860819 kubelet[2505]: E0424 23:47:52.860077 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:52.861645 containerd[1460]: time="2026-04-24T23:47:52.861370963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrzm8,Uid:2e377684-0f4a-4bf7-97a4-5dc72ba9220a,Namespace:kube-system,Attempt:0,}" Apr 24 23:47:52.899162 containerd[1460]: time="2026-04-24T23:47:52.899048049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:47:52.899162 containerd[1460]: time="2026-04-24T23:47:52.899096177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:47:52.899162 containerd[1460]: time="2026-04-24T23:47:52.899104584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:52.899588 containerd[1460]: time="2026-04-24T23:47:52.899168560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:52.934955 systemd[1]: Started cri-containerd-595da3c4e23fad2a3204e288952b326dac77219c6eaa0529f0c8eda6bc5222c1.scope - libcontainer container 595da3c4e23fad2a3204e288952b326dac77219c6eaa0529f0c8eda6bc5222c1. 
Apr 24 23:47:52.951275 containerd[1460]: time="2026-04-24T23:47:52.951073667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-7r48c,Uid:9147373b-1e15-41dd-96a7-0e916f85c355,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:47:52.975755 containerd[1460]: time="2026-04-24T23:47:52.975701066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrzm8,Uid:2e377684-0f4a-4bf7-97a4-5dc72ba9220a,Namespace:kube-system,Attempt:0,} returns sandbox id \"595da3c4e23fad2a3204e288952b326dac77219c6eaa0529f0c8eda6bc5222c1\"" Apr 24 23:47:52.977722 kubelet[2505]: E0424 23:47:52.977700 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:52.979835 containerd[1460]: time="2026-04-24T23:47:52.979419372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:47:52.979835 containerd[1460]: time="2026-04-24T23:47:52.979492922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:47:52.979835 containerd[1460]: time="2026-04-24T23:47:52.979505035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:52.979835 containerd[1460]: time="2026-04-24T23:47:52.979599406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:47:52.984342 containerd[1460]: time="2026-04-24T23:47:52.984059072Z" level=info msg="CreateContainer within sandbox \"595da3c4e23fad2a3204e288952b326dac77219c6eaa0529f0c8eda6bc5222c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:47:53.010935 containerd[1460]: time="2026-04-24T23:47:53.010734031Z" level=info msg="CreateContainer within sandbox \"595da3c4e23fad2a3204e288952b326dac77219c6eaa0529f0c8eda6bc5222c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c351452946c7b69341e79bcf99dc7a9dc468dfb149bc510ba70348d231846796\"" Apr 24 23:47:53.012558 containerd[1460]: time="2026-04-24T23:47:53.012483739Z" level=info msg="StartContainer for \"c351452946c7b69341e79bcf99dc7a9dc468dfb149bc510ba70348d231846796\"" Apr 24 23:47:53.019623 systemd[1]: Started cri-containerd-257f0e2eccca066e78674e6fe1b8c2928071a1ad03654ec09a2533c22476f410.scope - libcontainer container 257f0e2eccca066e78674e6fe1b8c2928071a1ad03654ec09a2533c22476f410. Apr 24 23:47:53.050917 systemd[1]: Started cri-containerd-c351452946c7b69341e79bcf99dc7a9dc468dfb149bc510ba70348d231846796.scope - libcontainer container c351452946c7b69341e79bcf99dc7a9dc468dfb149bc510ba70348d231846796. 
Apr 24 23:47:53.065234 containerd[1460]: time="2026-04-24T23:47:53.065196552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-7r48c,Uid:9147373b-1e15-41dd-96a7-0e916f85c355,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"257f0e2eccca066e78674e6fe1b8c2928071a1ad03654ec09a2533c22476f410\"" Apr 24 23:47:53.068161 containerd[1460]: time="2026-04-24T23:47:53.067917595Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:47:53.081067 containerd[1460]: time="2026-04-24T23:47:53.080971462Z" level=info msg="StartContainer for \"c351452946c7b69341e79bcf99dc7a9dc468dfb149bc510ba70348d231846796\" returns successfully" Apr 24 23:47:53.295933 kubelet[2505]: E0424 23:47:53.295666 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:54.805299 kubelet[2505]: E0424 23:47:54.805126 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:54.824351 kubelet[2505]: I0424 23:47:54.823967 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrzm8" podStartSLOduration=2.823918509 podStartE2EDuration="2.823918509s" podCreationTimestamp="2026-04-24 23:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:47:53.308810857 +0000 UTC m=+7.149737936" watchObservedRunningTime="2026-04-24 23:47:54.823918509 +0000 UTC m=+8.664845589" Apr 24 23:47:55.008528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871460921.mount: Deactivated successfully. 
Apr 24 23:47:55.298818 kubelet[2505]: E0424 23:47:55.298620 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:56.263276 containerd[1460]: time="2026-04-24T23:47:56.263085516Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:56.265576 containerd[1460]: time="2026-04-24T23:47:56.263470340Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 24 23:47:56.265576 containerd[1460]: time="2026-04-24T23:47:56.264678930Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:56.266338 containerd[1460]: time="2026-04-24T23:47:56.266307424Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:47:56.267151 containerd[1460]: time="2026-04-24T23:47:56.267114401Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.199163534s" Apr 24 23:47:56.267212 containerd[1460]: time="2026-04-24T23:47:56.267150531Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 24 23:47:56.271434 containerd[1460]: time="2026-04-24T23:47:56.271395535Z" level=info msg="CreateContainer within sandbox 
\"257f0e2eccca066e78674e6fe1b8c2928071a1ad03654ec09a2533c22476f410\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:47:56.290477 containerd[1460]: time="2026-04-24T23:47:56.290412874Z" level=info msg="CreateContainer within sandbox \"257f0e2eccca066e78674e6fe1b8c2928071a1ad03654ec09a2533c22476f410\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b53a70bdf43e929eb30c4063a79a9568a0e8dc5410a18c2a1356766f3626e7ed\"" Apr 24 23:47:56.291456 containerd[1460]: time="2026-04-24T23:47:56.291431207Z" level=info msg="StartContainer for \"b53a70bdf43e929eb30c4063a79a9568a0e8dc5410a18c2a1356766f3626e7ed\"" Apr 24 23:47:56.302173 kubelet[2505]: E0424 23:47:56.301738 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:56.338956 systemd[1]: Started cri-containerd-b53a70bdf43e929eb30c4063a79a9568a0e8dc5410a18c2a1356766f3626e7ed.scope - libcontainer container b53a70bdf43e929eb30c4063a79a9568a0e8dc5410a18c2a1356766f3626e7ed. 
Apr 24 23:47:56.374295 containerd[1460]: time="2026-04-24T23:47:56.374013336Z" level=info msg="StartContainer for \"b53a70bdf43e929eb30c4063a79a9568a0e8dc5410a18c2a1356766f3626e7ed\" returns successfully" Apr 24 23:47:56.661501 kubelet[2505]: E0424 23:47:56.660641 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:57.293509 kubelet[2505]: E0424 23:47:57.293324 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:57.313090 kubelet[2505]: E0424 23:47:57.312753 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:47:57.322269 kubelet[2505]: I0424 23:47:57.321826 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-7r48c" podStartSLOduration=2.121365843 podStartE2EDuration="5.321811982s" podCreationTimestamp="2026-04-24 23:47:52 +0000 UTC" firstStartedPulling="2026-04-24 23:47:53.067449616 +0000 UTC m=+6.908376696" lastFinishedPulling="2026-04-24 23:47:56.267895756 +0000 UTC m=+10.108822835" observedRunningTime="2026-04-24 23:47:57.321805035 +0000 UTC m=+11.162732114" watchObservedRunningTime="2026-04-24 23:47:57.321811982 +0000 UTC m=+11.162739080" Apr 24 23:48:01.517609 sudo[1633]: pam_unix(sudo:session): session closed for user root Apr 24 23:48:01.522266 sshd[1630]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:01.535939 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:57668.service: Deactivated successfully. Apr 24 23:48:01.540187 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 24 23:48:01.540442 systemd[1]: session-7.scope: Consumed 6.108s CPU time, 163.0M memory peak, 0B memory swap peak. Apr 24 23:48:01.541130 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:48:01.542513 systemd-logind[1438]: Removed session 7. Apr 24 23:48:02.039695 update_engine[1445]: I20260424 23:48:02.036053 1445 update_attempter.cc:509] Updating boot flags... Apr 24 23:48:02.155181 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2921) Apr 24 23:48:02.216865 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2923) Apr 24 23:48:03.449011 kubelet[2505]: I0424 23:48:03.447845 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9305ec80-f541-43f2-bef9-ff77f632d82f-typha-certs\") pod \"calico-typha-69494f8c7c-thqmq\" (UID: \"9305ec80-f541-43f2-bef9-ff77f632d82f\") " pod="calico-system/calico-typha-69494f8c7c-thqmq" Apr 24 23:48:03.449011 kubelet[2505]: I0424 23:48:03.448101 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9305ec80-f541-43f2-bef9-ff77f632d82f-tigera-ca-bundle\") pod \"calico-typha-69494f8c7c-thqmq\" (UID: \"9305ec80-f541-43f2-bef9-ff77f632d82f\") " pod="calico-system/calico-typha-69494f8c7c-thqmq" Apr 24 23:48:03.449011 kubelet[2505]: I0424 23:48:03.448159 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkt7\" (UniqueName: \"kubernetes.io/projected/9305ec80-f541-43f2-bef9-ff77f632d82f-kube-api-access-cfkt7\") pod \"calico-typha-69494f8c7c-thqmq\" (UID: \"9305ec80-f541-43f2-bef9-ff77f632d82f\") " pod="calico-system/calico-typha-69494f8c7c-thqmq" Apr 24 23:48:03.455049 systemd[1]: Created slice 
kubepods-besteffort-pod9305ec80_f541_43f2_bef9_ff77f632d82f.slice - libcontainer container kubepods-besteffort-pod9305ec80_f541_43f2_bef9_ff77f632d82f.slice. Apr 24 23:48:03.525762 systemd[1]: Created slice kubepods-besteffort-pod1d1719a2_0fd9_4b02_afe8_a27d0ab49e09.slice - libcontainer container kubepods-besteffort-pod1d1719a2_0fd9_4b02_afe8_a27d0ab49e09.slice. Apr 24 23:48:03.548858 kubelet[2505]: I0424 23:48:03.548418 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-cni-bin-dir\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.548858 kubelet[2505]: I0424 23:48:03.548668 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-nodeproc\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.548858 kubelet[2505]: I0424 23:48:03.548817 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-xtables-lock\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.548858 kubelet[2505]: I0424 23:48:03.548857 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k4nc\" (UniqueName: \"kubernetes.io/projected/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-kube-api-access-9k4nc\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.548858 kubelet[2505]: I0424 23:48:03.548872 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-sys-fs\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549862 kubelet[2505]: I0424 23:48:03.548883 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-var-run-calico\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549862 kubelet[2505]: I0424 23:48:03.548942 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-cni-net-dir\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549862 kubelet[2505]: I0424 23:48:03.548955 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-flexvol-driver-host\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549862 kubelet[2505]: I0424 23:48:03.548995 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-node-certs\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549862 kubelet[2505]: I0424 23:48:03.549008 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-policysync\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549959 kubelet[2505]: I0424 23:48:03.549758 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-cni-log-dir\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549959 kubelet[2505]: I0424 23:48:03.549883 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-bpffs\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549959 kubelet[2505]: I0424 23:48:03.549904 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-var-lib-calico\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549959 kubelet[2505]: I0424 23:48:03.549927 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-lib-modules\") pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.549959 kubelet[2505]: I0424 23:48:03.549940 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1719a2-0fd9-4b02-afe8-a27d0ab49e09-tigera-ca-bundle\") 
pod \"calico-node-8dlbs\" (UID: \"1d1719a2-0fd9-4b02-afe8-a27d0ab49e09\") " pod="calico-system/calico-node-8dlbs" Apr 24 23:48:03.650762 kubelet[2505]: I0424 23:48:03.650648 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8b177f61-af99-4e0d-af51-f699d327434d-socket-dir\") pod \"csi-node-driver-zpms9\" (UID: \"8b177f61-af99-4e0d-af51-f699d327434d\") " pod="calico-system/csi-node-driver-zpms9" Apr 24 23:48:03.656309 kubelet[2505]: E0424 23:48:03.654260 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:03.656419 kubelet[2505]: E0424 23:48:03.656370 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.656419 kubelet[2505]: W0424 23:48:03.656395 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.656458 kubelet[2505]: E0424 23:48:03.656424 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.661964 kubelet[2505]: E0424 23:48:03.657634 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.661964 kubelet[2505]: W0424 23:48:03.657666 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.661964 kubelet[2505]: E0424 23:48:03.657683 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.661964 kubelet[2505]: E0424 23:48:03.659527 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.661964 kubelet[2505]: W0424 23:48:03.659537 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.661964 kubelet[2505]: E0424 23:48:03.659551 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.661964 kubelet[2505]: I0424 23:48:03.659603 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54h5n\" (UniqueName: \"kubernetes.io/projected/8b177f61-af99-4e0d-af51-f699d327434d-kube-api-access-54h5n\") pod \"csi-node-driver-zpms9\" (UID: \"8b177f61-af99-4e0d-af51-f699d327434d\") " pod="calico-system/csi-node-driver-zpms9" Apr 24 23:48:03.664749 kubelet[2505]: E0424 23:48:03.664615 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.664749 kubelet[2505]: W0424 23:48:03.664744 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.664854 kubelet[2505]: E0424 23:48:03.664798 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.665164 kubelet[2505]: E0424 23:48:03.665144 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.665164 kubelet[2505]: W0424 23:48:03.665161 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.665851 kubelet[2505]: E0424 23:48:03.665172 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.666952 kubelet[2505]: E0424 23:48:03.666902 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.666952 kubelet[2505]: W0424 23:48:03.666952 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.667064 kubelet[2505]: E0424 23:48:03.667006 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.669797 kubelet[2505]: E0424 23:48:03.668159 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.669797 kubelet[2505]: W0424 23:48:03.668199 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.669797 kubelet[2505]: E0424 23:48:03.668214 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.669797 kubelet[2505]: E0424 23:48:03.668457 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.669797 kubelet[2505]: W0424 23:48:03.668465 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.669797 kubelet[2505]: E0424 23:48:03.668472 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.669797 kubelet[2505]: E0424 23:48:03.668864 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.669797 kubelet[2505]: W0424 23:48:03.668872 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.669797 kubelet[2505]: E0424 23:48:03.668881 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.674996 kubelet[2505]: E0424 23:48:03.674967 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.674996 kubelet[2505]: W0424 23:48:03.674989 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.675085 kubelet[2505]: E0424 23:48:03.675003 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.675873 kubelet[2505]: E0424 23:48:03.675671 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.675873 kubelet[2505]: W0424 23:48:03.675686 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.675873 kubelet[2505]: E0424 23:48:03.675696 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:48:03.676616 kubelet[2505]: E0424 23:48:03.676530 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:48:03.676616 kubelet[2505]: W0424 23:48:03.676552 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:48:03.676616 kubelet[2505]: E0424 23:48:03.676562 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... three-line sequence above repeats with later timestamps through 23:48:03.710 ...]
Apr 24 23:48:03.688661 kubelet[2505]: I0424 23:48:03.688637 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b177f61-af99-4e0d-af51-f699d327434d-kubelet-dir\") pod \"csi-node-driver-zpms9\" (UID: \"8b177f61-af99-4e0d-af51-f699d327434d\") " pod="calico-system/csi-node-driver-zpms9"
Apr 24 23:48:03.692205 kubelet[2505]: I0424 23:48:03.692113 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8b177f61-af99-4e0d-af51-f699d327434d-varrun\") pod \"csi-node-driver-zpms9\" (UID: \"8b177f61-af99-4e0d-af51-f699d327434d\") " pod="calico-system/csi-node-driver-zpms9"
Apr 24 23:48:03.698805 kubelet[2505]: I0424 23:48:03.698666 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8b177f61-af99-4e0d-af51-f699d327434d-registration-dir\") pod \"csi-node-driver-zpms9\" (UID: \"8b177f61-af99-4e0d-af51-f699d327434d\") " pod="calico-system/csi-node-driver-zpms9"
Apr 24 23:48:03.711795 kubelet[2505]: E0424 23:48:03.710329 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:48:03.711988 kubelet[2505]: W0424 23:48:03.710334 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:48:03.711988 kubelet[2505]: E0424 23:48:03.710378 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:48:03.711988 kubelet[2505]: E0424 23:48:03.710534 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.711988 kubelet[2505]: W0424 23:48:03.710540 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.711988 kubelet[2505]: E0424 23:48:03.710545 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.711988 kubelet[2505]: E0424 23:48:03.710698 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.711988 kubelet[2505]: W0424 23:48:03.710704 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.711988 kubelet[2505]: E0424 23:48:03.710709 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.711988 kubelet[2505]: E0424 23:48:03.710983 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.711988 kubelet[2505]: W0424 23:48:03.710989 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.712149 kubelet[2505]: E0424 23:48:03.710996 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.712967 kubelet[2505]: E0424 23:48:03.712943 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.712967 kubelet[2505]: W0424 23:48:03.712962 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.713034 kubelet[2505]: E0424 23:48:03.712974 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.715426 kubelet[2505]: E0424 23:48:03.715337 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.715426 kubelet[2505]: W0424 23:48:03.715430 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.716096 kubelet[2505]: E0424 23:48:03.715483 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.716096 kubelet[2505]: E0424 23:48:03.715755 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.716096 kubelet[2505]: W0424 23:48:03.715825 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.716096 kubelet[2505]: E0424 23:48:03.715835 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.716378 kubelet[2505]: E0424 23:48:03.716352 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.716378 kubelet[2505]: W0424 23:48:03.716372 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.716425 kubelet[2505]: E0424 23:48:03.716383 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.716933 kubelet[2505]: E0424 23:48:03.716844 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.716933 kubelet[2505]: W0424 23:48:03.716853 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.716933 kubelet[2505]: E0424 23:48:03.716860 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.717424 kubelet[2505]: E0424 23:48:03.717381 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.717424 kubelet[2505]: W0424 23:48:03.717399 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.717424 kubelet[2505]: E0424 23:48:03.717407 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.717690 kubelet[2505]: E0424 23:48:03.717672 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.717690 kubelet[2505]: W0424 23:48:03.717685 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.717737 kubelet[2505]: E0424 23:48:03.717697 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.727062 kubelet[2505]: E0424 23:48:03.727035 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.727062 kubelet[2505]: W0424 23:48:03.727055 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.727188 kubelet[2505]: E0424 23:48:03.727071 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.764886 kubelet[2505]: E0424 23:48:03.764831 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:03.766526 containerd[1460]: time="2026-04-24T23:48:03.766453950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69494f8c7c-thqmq,Uid:9305ec80-f541-43f2-bef9-ff77f632d82f,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:03.796027 containerd[1460]: time="2026-04-24T23:48:03.795615018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:03.796979 containerd[1460]: time="2026-04-24T23:48:03.796854408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:03.796979 containerd[1460]: time="2026-04-24T23:48:03.796876768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:03.797601 containerd[1460]: time="2026-04-24T23:48:03.797001862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:03.810151 kubelet[2505]: E0424 23:48:03.810006 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.810151 kubelet[2505]: W0424 23:48:03.810046 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.810151 kubelet[2505]: E0424 23:48:03.810083 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.811032 kubelet[2505]: E0424 23:48:03.810452 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.811032 kubelet[2505]: W0424 23:48:03.810460 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.811032 kubelet[2505]: E0424 23:48:03.810470 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.811032 kubelet[2505]: E0424 23:48:03.810696 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.811032 kubelet[2505]: W0424 23:48:03.810702 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.811032 kubelet[2505]: E0424 23:48:03.810709 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.811032 kubelet[2505]: E0424 23:48:03.811025 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.811032 kubelet[2505]: W0424 23:48:03.811033 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.811291 kubelet[2505]: E0424 23:48:03.811043 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.811345 kubelet[2505]: E0424 23:48:03.811330 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.811345 kubelet[2505]: W0424 23:48:03.811343 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.811445 kubelet[2505]: E0424 23:48:03.811351 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.811795 kubelet[2505]: E0424 23:48:03.811756 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.811825 kubelet[2505]: W0424 23:48:03.811796 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.811825 kubelet[2505]: E0424 23:48:03.811806 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.812086 kubelet[2505]: E0424 23:48:03.812071 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.812086 kubelet[2505]: W0424 23:48:03.812084 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.812169 kubelet[2505]: E0424 23:48:03.812092 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.812402 kubelet[2505]: E0424 23:48:03.812382 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.812437 kubelet[2505]: W0424 23:48:03.812426 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.812455 kubelet[2505]: E0424 23:48:03.812439 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.812646 kubelet[2505]: E0424 23:48:03.812632 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.812646 kubelet[2505]: W0424 23:48:03.812645 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.812685 kubelet[2505]: E0424 23:48:03.812653 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.812946 kubelet[2505]: E0424 23:48:03.812931 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.812946 kubelet[2505]: W0424 23:48:03.812945 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.812981 kubelet[2505]: E0424 23:48:03.812952 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.813147 kubelet[2505]: E0424 23:48:03.813118 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.813147 kubelet[2505]: W0424 23:48:03.813145 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.813193 kubelet[2505]: E0424 23:48:03.813151 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.813396 kubelet[2505]: E0424 23:48:03.813382 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.813396 kubelet[2505]: W0424 23:48:03.813394 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.813448 kubelet[2505]: E0424 23:48:03.813401 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.813619 kubelet[2505]: E0424 23:48:03.813606 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.813619 kubelet[2505]: W0424 23:48:03.813619 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.813662 kubelet[2505]: E0424 23:48:03.813626 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.813961 kubelet[2505]: E0424 23:48:03.813941 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.813983 kubelet[2505]: W0424 23:48:03.813961 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.813983 kubelet[2505]: E0424 23:48:03.813973 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.814156 kubelet[2505]: E0424 23:48:03.814143 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.814156 kubelet[2505]: W0424 23:48:03.814154 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.814196 kubelet[2505]: E0424 23:48:03.814161 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814352 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815312 kubelet[2505]: W0424 23:48:03.814361 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814367 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814514 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815312 kubelet[2505]: W0424 23:48:03.814519 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814525 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814689 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815312 kubelet[2505]: W0424 23:48:03.814699 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814705 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.815312 kubelet[2505]: E0424 23:48:03.814896 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815541 kubelet[2505]: W0424 23:48:03.814901 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815541 kubelet[2505]: E0424 23:48:03.814907 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.815541 kubelet[2505]: E0424 23:48:03.815103 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815541 kubelet[2505]: W0424 23:48:03.815112 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815541 kubelet[2505]: E0424 23:48:03.815118 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.815541 kubelet[2505]: E0424 23:48:03.815357 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815541 kubelet[2505]: W0424 23:48:03.815363 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815541 kubelet[2505]: E0424 23:48:03.815369 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.815657 kubelet[2505]: E0424 23:48:03.815650 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815673 kubelet[2505]: W0424 23:48:03.815658 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.815800 kubelet[2505]: E0424 23:48:03.815665 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.815974 kubelet[2505]: E0424 23:48:03.815960 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.815974 kubelet[2505]: W0424 23:48:03.815972 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.816074 kubelet[2505]: E0424 23:48:03.815978 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.816196 kubelet[2505]: E0424 23:48:03.816174 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.816196 kubelet[2505]: W0424 23:48:03.816183 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.816196 kubelet[2505]: E0424 23:48:03.816189 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:03.816424 kubelet[2505]: E0424 23:48:03.816411 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.816424 kubelet[2505]: W0424 23:48:03.816422 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.816468 kubelet[2505]: E0424 23:48:03.816428 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.823646 kubelet[2505]: E0424 23:48:03.823623 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:03.823646 kubelet[2505]: W0424 23:48:03.823641 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:03.823646 kubelet[2505]: E0424 23:48:03.823652 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:03.829558 containerd[1460]: time="2026-04-24T23:48:03.829505915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8dlbs,Uid:1d1719a2-0fd9-4b02-afe8-a27d0ab49e09,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:03.830985 systemd[1]: Started cri-containerd-8aa94a7017c4dde48a9dd3fbd00a3e122d0cd7d1b955b72260f80a337bc4d5d6.scope - libcontainer container 8aa94a7017c4dde48a9dd3fbd00a3e122d0cd7d1b955b72260f80a337bc4d5d6. Apr 24 23:48:03.883486 containerd[1460]: time="2026-04-24T23:48:03.880892419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:03.883486 containerd[1460]: time="2026-04-24T23:48:03.881060780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:03.883486 containerd[1460]: time="2026-04-24T23:48:03.881085375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:03.883486 containerd[1460]: time="2026-04-24T23:48:03.881299696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:03.893388 containerd[1460]: time="2026-04-24T23:48:03.893282464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69494f8c7c-thqmq,Uid:9305ec80-f541-43f2-bef9-ff77f632d82f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8aa94a7017c4dde48a9dd3fbd00a3e122d0cd7d1b955b72260f80a337bc4d5d6\"" Apr 24 23:48:03.899336 kubelet[2505]: E0424 23:48:03.898709 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:03.906617 containerd[1460]: time="2026-04-24T23:48:03.906423924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:48:03.939968 systemd[1]: Started cri-containerd-d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755.scope - libcontainer container d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755. 
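The recurring kubelet `dns.go:153 "Nameserver limits exceeded"` entries above come from the host's resolv.conf listing more resolvers than glibc honours: glibc (and hence kubelet's DNS configuration check) uses at most the first 3 `nameserver` lines, so kubelet warns and applies only `1.1.1.1 1.0.0.1 8.8.8.8`, omitting the rest. A minimal sketch of a conforming configuration (the `/tmp` path is purely illustrative, not a file from this system):

```shell
# Hypothetical illustration: keep resolv.conf at or under the 3-nameserver
# limit so kubelet does not have to drop resolvers and log this warning.
cat > /tmp/resolv.conf.example <<'EOF'
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
EOF
# Count the nameserver lines: 3 is the maximum glibc will use.
grep -c '^nameserver' /tmp/resolv.conf.example
```

Trimming the host resolv.conf (or the file pointed to by kubelet's `--resolv-conf` flag) to three entries silences this warning without changing effective resolution behaviour.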
Apr 24 23:48:03.979407 containerd[1460]: time="2026-04-24T23:48:03.979033555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8dlbs,Uid:1d1719a2-0fd9-4b02-afe8-a27d0ab49e09,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\"" Apr 24 23:48:05.261521 kubelet[2505]: E0424 23:48:05.260861 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:05.629607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685877428.mount: Deactivated successfully. Apr 24 23:48:06.697920 containerd[1460]: time="2026-04-24T23:48:06.697832424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:06.700196 containerd[1460]: time="2026-04-24T23:48:06.699948740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 24 23:48:06.701071 containerd[1460]: time="2026-04-24T23:48:06.701043764Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:06.707386 containerd[1460]: time="2026-04-24T23:48:06.707256780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:06.708675 containerd[1460]: time="2026-04-24T23:48:06.708629314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.802045258s" Apr 24 23:48:06.708721 containerd[1460]: time="2026-04-24T23:48:06.708694400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 24 23:48:06.715379 containerd[1460]: time="2026-04-24T23:48:06.715353278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:48:06.750238 containerd[1460]: time="2026-04-24T23:48:06.750187937Z" level=info msg="CreateContainer within sandbox \"8aa94a7017c4dde48a9dd3fbd00a3e122d0cd7d1b955b72260f80a337bc4d5d6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:48:06.762412 containerd[1460]: time="2026-04-24T23:48:06.762350297Z" level=info msg="CreateContainer within sandbox \"8aa94a7017c4dde48a9dd3fbd00a3e122d0cd7d1b955b72260f80a337bc4d5d6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a6395a702d450e5e40883ae63938e07a08d79611fcb847d11f1c9580eac5aae\"" Apr 24 23:48:06.763121 containerd[1460]: time="2026-04-24T23:48:06.763071575Z" level=info msg="StartContainer for \"4a6395a702d450e5e40883ae63938e07a08d79611fcb847d11f1c9580eac5aae\"" Apr 24 23:48:06.795577 systemd[1]: Started cri-containerd-4a6395a702d450e5e40883ae63938e07a08d79611fcb847d11f1c9580eac5aae.scope - libcontainer container 4a6395a702d450e5e40883ae63938e07a08d79611fcb847d11f1c9580eac5aae. 
Apr 24 23:48:06.865447 containerd[1460]: time="2026-04-24T23:48:06.865134538Z" level=info msg="StartContainer for \"4a6395a702d450e5e40883ae63938e07a08d79611fcb847d11f1c9580eac5aae\" returns successfully"
Apr 24 23:48:07.264941 kubelet[2505]: E0424 23:48:07.264714 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d"
Apr 24 23:48:07.390272 kubelet[2505]: E0424 23:48:07.390071 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:48:07.412063 kubelet[2505]: I0424 23:48:07.411766 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69494f8c7c-thqmq" podStartSLOduration=1.596953716 podStartE2EDuration="4.411722218s" podCreationTimestamp="2026-04-24 23:48:03 +0000 UTC" firstStartedPulling="2026-04-24 23:48:03.90033062 +0000 UTC m=+17.741257700" lastFinishedPulling="2026-04-24 23:48:06.715099116 +0000 UTC m=+20.556026202" observedRunningTime="2026-04-24 23:48:07.411255729 +0000 UTC m=+21.252182810" watchObservedRunningTime="2026-04-24 23:48:07.411722218 +0000 UTC m=+21.252649309"
Apr 24 23:48:07.481220 kubelet[2505]: E0424 23:48:07.480857 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:48:07.481220 kubelet[2505]: W0424 23:48:07.481193 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:48:07.482643 kubelet[2505]: E0424 23:48:07.482597 2505 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.483048 kubelet[2505]: E0424 23:48:07.483024 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.483048 kubelet[2505]: W0424 23:48:07.483040 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.483048 kubelet[2505]: E0424 23:48:07.483054 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.483384 kubelet[2505]: E0424 23:48:07.483343 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.483384 kubelet[2505]: W0424 23:48:07.483352 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.483384 kubelet[2505]: E0424 23:48:07.483363 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.483691 kubelet[2505]: E0424 23:48:07.483675 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.483691 kubelet[2505]: W0424 23:48:07.483688 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.483756 kubelet[2505]: E0424 23:48:07.483699 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.484248 kubelet[2505]: E0424 23:48:07.484207 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.484248 kubelet[2505]: W0424 23:48:07.484232 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.484248 kubelet[2505]: E0424 23:48:07.484244 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.484440 kubelet[2505]: E0424 23:48:07.484385 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.484440 kubelet[2505]: W0424 23:48:07.484390 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.484440 kubelet[2505]: E0424 23:48:07.484395 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.484532 kubelet[2505]: E0424 23:48:07.484523 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.484532 kubelet[2505]: W0424 23:48:07.484527 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.484593 kubelet[2505]: E0424 23:48:07.484533 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.484670 kubelet[2505]: E0424 23:48:07.484656 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.484670 kubelet[2505]: W0424 23:48:07.484666 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.484739 kubelet[2505]: E0424 23:48:07.484671 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.484854 kubelet[2505]: E0424 23:48:07.484841 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.484854 kubelet[2505]: W0424 23:48:07.484851 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.484928 kubelet[2505]: E0424 23:48:07.484857 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.484990 kubelet[2505]: E0424 23:48:07.484978 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.484990 kubelet[2505]: W0424 23:48:07.484987 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.485052 kubelet[2505]: E0424 23:48:07.484992 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.485130 kubelet[2505]: E0424 23:48:07.485117 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.485130 kubelet[2505]: W0424 23:48:07.485126 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.485209 kubelet[2505]: E0424 23:48:07.485132 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.485285 kubelet[2505]: E0424 23:48:07.485271 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.485285 kubelet[2505]: W0424 23:48:07.485281 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.485345 kubelet[2505]: E0424 23:48:07.485286 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.485525 kubelet[2505]: E0424 23:48:07.485490 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.485525 kubelet[2505]: W0424 23:48:07.485506 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.485525 kubelet[2505]: E0424 23:48:07.485519 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.485686 kubelet[2505]: E0424 23:48:07.485654 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.485686 kubelet[2505]: W0424 23:48:07.485666 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.485686 kubelet[2505]: E0424 23:48:07.485671 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.485836 kubelet[2505]: E0424 23:48:07.485823 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.485836 kubelet[2505]: W0424 23:48:07.485833 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.485907 kubelet[2505]: E0424 23:48:07.485838 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.586488 kubelet[2505]: E0424 23:48:07.585909 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.586488 kubelet[2505]: W0424 23:48:07.585948 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.586488 kubelet[2505]: E0424 23:48:07.585988 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.586488 kubelet[2505]: E0424 23:48:07.586333 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.586488 kubelet[2505]: W0424 23:48:07.586339 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.586488 kubelet[2505]: E0424 23:48:07.586348 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.588561 kubelet[2505]: E0424 23:48:07.588520 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.588561 kubelet[2505]: W0424 23:48:07.588540 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.588656 kubelet[2505]: E0424 23:48:07.588553 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.589000 kubelet[2505]: E0424 23:48:07.588985 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.589000 kubelet[2505]: W0424 23:48:07.588997 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.589062 kubelet[2505]: E0424 23:48:07.589005 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.589194 kubelet[2505]: E0424 23:48:07.589182 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.589194 kubelet[2505]: W0424 23:48:07.589192 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.589231 kubelet[2505]: E0424 23:48:07.589199 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.589364 kubelet[2505]: E0424 23:48:07.589353 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.589364 kubelet[2505]: W0424 23:48:07.589363 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.589411 kubelet[2505]: E0424 23:48:07.589370 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.589524 kubelet[2505]: E0424 23:48:07.589512 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.589524 kubelet[2505]: W0424 23:48:07.589522 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.589554 kubelet[2505]: E0424 23:48:07.589528 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.589706 kubelet[2505]: E0424 23:48:07.589694 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.589706 kubelet[2505]: W0424 23:48:07.589705 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.589744 kubelet[2505]: E0424 23:48:07.589711 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.590062 kubelet[2505]: E0424 23:48:07.590025 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.590062 kubelet[2505]: W0424 23:48:07.590046 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.590062 kubelet[2505]: E0424 23:48:07.590058 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.590247 kubelet[2505]: E0424 23:48:07.590235 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.590247 kubelet[2505]: W0424 23:48:07.590246 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.590342 kubelet[2505]: E0424 23:48:07.590254 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.590402 kubelet[2505]: E0424 23:48:07.590390 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.590402 kubelet[2505]: W0424 23:48:07.590400 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.590435 kubelet[2505]: E0424 23:48:07.590408 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.590666 kubelet[2505]: E0424 23:48:07.590653 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.590666 kubelet[2505]: W0424 23:48:07.590664 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.590718 kubelet[2505]: E0424 23:48:07.590670 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.590863 kubelet[2505]: E0424 23:48:07.590851 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.590863 kubelet[2505]: W0424 23:48:07.590862 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.590896 kubelet[2505]: E0424 23:48:07.590868 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.591015 kubelet[2505]: E0424 23:48:07.591004 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.591039 kubelet[2505]: W0424 23:48:07.591014 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.591039 kubelet[2505]: E0424 23:48:07.591020 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.591230 kubelet[2505]: E0424 23:48:07.591217 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.591230 kubelet[2505]: W0424 23:48:07.591229 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.591316 kubelet[2505]: E0424 23:48:07.591236 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.591393 kubelet[2505]: E0424 23:48:07.591382 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.591393 kubelet[2505]: W0424 23:48:07.591391 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.591430 kubelet[2505]: E0424 23:48:07.591396 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:07.591619 kubelet[2505]: E0424 23:48:07.591603 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.591640 kubelet[2505]: W0424 23:48:07.591618 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.591640 kubelet[2505]: E0424 23:48:07.591629 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:07.591880 kubelet[2505]: E0424 23:48:07.591868 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:07.591903 kubelet[2505]: W0424 23:48:07.591880 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:07.591903 kubelet[2505]: E0424 23:48:07.591887 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:48:08.393827 kubelet[2505]: I0424 23:48:08.393703 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:48:08.395594 kubelet[2505]: E0424 23:48:08.394526 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:48:08.425159 containerd[1460]: time="2026-04-24T23:48:08.424842779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 24 23:48:08.425159 containerd[1460]: time="2026-04-24T23:48:08.425068359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:48:08.427119 containerd[1460]: time="2026-04-24T23:48:08.426700355Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:48:08.428535 containerd[1460]: time="2026-04-24T23:48:08.428489640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:48:08.429126 containerd[1460]: time="2026-04-24T23:48:08.429081988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.713704133s"
Apr 24 23:48:08.429162 containerd[1460]: time="2026-04-24T23:48:08.429142204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 24 23:48:08.434385 containerd[1460]: time="2026-04-24T23:48:08.434347565Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 24 23:48:08.459446 containerd[1460]: time="2026-04-24T23:48:08.459326884Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1\""
Apr 24 23:48:08.461224 containerd[1460]: time="2026-04-24T23:48:08.461059705Z" level=info msg="StartContainer for \"925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1\""
Apr 24 23:48:08.493697 kubelet[2505]: E0424 23:48:08.493493 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:48:08.493697 kubelet[2505]: W0424 23:48:08.493556 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:48:08.493697 kubelet[2505]: E0424 23:48:08.493632 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:48:08.493949 systemd[1]: Started cri-containerd-925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1.scope - libcontainer container 925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1.
Apr 24 23:48:08.497274 kubelet[2505]: E0424 23:48:08.495060 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:48:08.497274 kubelet[2505]: W0424 23:48:08.495065 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:48:08.497274 kubelet[2505]: E0424 23:48:08.495070 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.497274 kubelet[2505]: E0424 23:48:08.495200 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.497274 kubelet[2505]: W0424 23:48:08.495209 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.497274 kubelet[2505]: E0424 23:48:08.495217 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.497274 kubelet[2505]: E0424 23:48:08.495353 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.497274 kubelet[2505]: W0424 23:48:08.495358 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.497274 kubelet[2505]: E0424 23:48:08.495363 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495482 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.497436 kubelet[2505]: W0424 23:48:08.495487 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495492 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495638 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.497436 kubelet[2505]: W0424 23:48:08.495644 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495649 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495786 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.497436 kubelet[2505]: W0424 23:48:08.495795 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495800 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.497436 kubelet[2505]: E0424 23:48:08.495973 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.497590 kubelet[2505]: W0424 23:48:08.495977 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.497590 kubelet[2505]: E0424 23:48:08.495982 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.498161 kubelet[2505]: E0424 23:48:08.498139 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.498161 kubelet[2505]: W0424 23:48:08.498157 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.498344 kubelet[2505]: E0424 23:48:08.498167 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.498577 kubelet[2505]: E0424 23:48:08.498480 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.498577 kubelet[2505]: W0424 23:48:08.498491 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.498577 kubelet[2505]: E0424 23:48:08.498498 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.498824 kubelet[2505]: E0424 23:48:08.498730 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.498824 kubelet[2505]: W0424 23:48:08.498738 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.498824 kubelet[2505]: E0424 23:48:08.498746 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.499819 kubelet[2505]: E0424 23:48:08.499135 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.499819 kubelet[2505]: W0424 23:48:08.499502 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.499819 kubelet[2505]: E0424 23:48:08.499515 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.500128 kubelet[2505]: E0424 23:48:08.499875 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.500128 kubelet[2505]: W0424 23:48:08.499886 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.500128 kubelet[2505]: E0424 23:48:08.499897 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.500128 kubelet[2505]: E0424 23:48:08.500056 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.500128 kubelet[2505]: W0424 23:48:08.500062 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.500128 kubelet[2505]: E0424 23:48:08.500069 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.501735 kubelet[2505]: E0424 23:48:08.501717 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.501735 kubelet[2505]: W0424 23:48:08.501729 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.501735 kubelet[2505]: E0424 23:48:08.501738 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.502030 kubelet[2505]: E0424 23:48:08.502016 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.502030 kubelet[2505]: W0424 23:48:08.502028 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.502094 kubelet[2505]: E0424 23:48:08.502036 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.502228 kubelet[2505]: E0424 23:48:08.502213 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.502228 kubelet[2505]: W0424 23:48:08.502227 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.502293 kubelet[2505]: E0424 23:48:08.502248 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.502624 kubelet[2505]: E0424 23:48:08.502607 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.502665 kubelet[2505]: W0424 23:48:08.502626 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.502665 kubelet[2505]: E0424 23:48:08.502638 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.502927 kubelet[2505]: E0424 23:48:08.502907 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.502927 kubelet[2505]: W0424 23:48:08.502923 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.503029 kubelet[2505]: E0424 23:48:08.502935 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.504739 kubelet[2505]: E0424 23:48:08.504568 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.504739 kubelet[2505]: W0424 23:48:08.504693 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.505866 kubelet[2505]: E0424 23:48:08.504952 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.506445 kubelet[2505]: E0424 23:48:08.506066 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.506445 kubelet[2505]: W0424 23:48:08.506079 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.506445 kubelet[2505]: E0424 23:48:08.506215 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.507327 kubelet[2505]: E0424 23:48:08.507312 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.507327 kubelet[2505]: W0424 23:48:08.507324 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.507408 kubelet[2505]: E0424 23:48:08.507333 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.507568 kubelet[2505]: E0424 23:48:08.507555 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.507619 kubelet[2505]: W0424 23:48:08.507568 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.507619 kubelet[2505]: E0424 23:48:08.507575 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.507989 kubelet[2505]: E0424 23:48:08.507957 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.507989 kubelet[2505]: W0424 23:48:08.507974 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.507989 kubelet[2505]: E0424 23:48:08.507981 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:48:08.508290 kubelet[2505]: E0424 23:48:08.508264 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.508318 kubelet[2505]: W0424 23:48:08.508295 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.508318 kubelet[2505]: E0424 23:48:08.508302 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.508705 kubelet[2505]: E0424 23:48:08.508680 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:48:08.508725 kubelet[2505]: W0424 23:48:08.508715 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:48:08.508743 kubelet[2505]: E0424 23:48:08.508725 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:48:08.533006 containerd[1460]: time="2026-04-24T23:48:08.532322837Z" level=info msg="StartContainer for \"925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1\" returns successfully" Apr 24 23:48:08.543586 systemd[1]: cri-containerd-925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1.scope: Deactivated successfully. Apr 24 23:48:08.571727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1-rootfs.mount: Deactivated successfully. 
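The repeated kubelet errors above all have one cause visible in the log itself: the FlexVolume prober tries to run /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument, the executable is not present, the call therefore produces empty output, and decoding "" as JSON fails (Go reports this as "unexpected end of JSON input"). A minimal sketch of that failure mode, with a hypothetical driver path, not the kubelet's actual implementation:

```python
import json
import subprocess

def call_flexvolume_driver(executable, command="init"):
    """Run a FlexVolume-style driver and decode its JSON reply.

    Mirrors the failure chain in the log above: a missing executable
    yields empty output, and decoding "" as JSON then fails the same
    way Go's "unexpected end of JSON input" does.
    """
    try:
        out = subprocess.run([executable, command],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        # kubelet logs: "executable file not found in $PATH, output: \"\""
        out = ""
    try:
        return json.loads(out)
    except json.JSONDecodeError as exc:
        # kubelet logs: "Failed to unmarshal output for command: init"
        return {"status": "Failure", "message": f"unmarshal failed: {exc}"}
```

Under this sketch, the plugin directory is skipped whenever the decode step fails, which is why the three-line pattern (unmarshal error, driver-call warning, probe error) repeats on every probe cycle until the binary exists.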
Apr 24 23:48:08.586256 containerd[1460]: time="2026-04-24T23:48:08.581223697Z" level=info msg="shim disconnected" id=925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1 namespace=k8s.io Apr 24 23:48:08.586256 containerd[1460]: time="2026-04-24T23:48:08.586271426Z" level=warning msg="cleaning up after shim disconnected" id=925ca0f413b609d30b39418b9efe3f1bf183140a1b3ceb800cee792c1917b1c1 namespace=k8s.io Apr 24 23:48:08.586256 containerd[1460]: time="2026-04-24T23:48:08.586320649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:48:09.261656 kubelet[2505]: E0424 23:48:09.261506 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:09.398732 containerd[1460]: time="2026-04-24T23:48:09.398100973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:48:11.262510 kubelet[2505]: E0424 23:48:11.262134 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:13.261873 kubelet[2505]: E0424 23:48:13.261617 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:15.260425 kubelet[2505]: E0424 23:48:15.260339 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:16.051693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129322227.mount: Deactivated successfully. Apr 24 23:48:16.114393 containerd[1460]: time="2026-04-24T23:48:16.114216133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:16.115660 containerd[1460]: time="2026-04-24T23:48:16.114934754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:48:16.115839 containerd[1460]: time="2026-04-24T23:48:16.115815790Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:16.117667 containerd[1460]: time="2026-04-24T23:48:16.117596424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:16.118072 containerd[1460]: time="2026-04-24T23:48:16.118046368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.719790567s" Apr 24 23:48:16.118165 containerd[1460]: time="2026-04-24T23:48:16.118080869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 
23:48:16.141826 containerd[1460]: time="2026-04-24T23:48:16.141712615Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:48:16.176346 containerd[1460]: time="2026-04-24T23:48:16.176262922Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7\"" Apr 24 23:48:16.177687 containerd[1460]: time="2026-04-24T23:48:16.177646818Z" level=info msg="StartContainer for \"f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7\"" Apr 24 23:48:16.251137 systemd[1]: Started cri-containerd-f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7.scope - libcontainer container f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7. Apr 24 23:48:16.283327 containerd[1460]: time="2026-04-24T23:48:16.283083705Z" level=info msg="StartContainer for \"f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7\" returns successfully" Apr 24 23:48:16.375170 systemd[1]: cri-containerd-f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7.scope: Deactivated successfully. 
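The pull of ghcr.io/flatcar/calico/node:v3.31.4 above reports 159838564 bytes read over 6.719790567 s, which works out to roughly 22.7 MiB/s. A small helper, using only the figures from the "stop pulling image" and "Pulled image ... in" lines above, to turn containerd's pull stats into a throughput number:

```python
def pull_throughput(bytes_read: int, seconds: float) -> float:
    """Convert containerd pull stats (bytes read, wall time) to MiB/s."""
    return bytes_read / seconds / (1024 * 1024)

# Figures reported in the log lines above for calico/node:v3.31.4.
rate = pull_throughput(159_838_564, 6.719790567)  # ~22.7 MiB/s
```

The same arithmetic applies to the calico/cni pull later in the log (70611671 bytes in 4.74028019 s, a slower ~14.2 MiB/s).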
Apr 24 23:48:16.515201 containerd[1460]: time="2026-04-24T23:48:16.514855914Z" level=info msg="shim disconnected" id=f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7 namespace=k8s.io Apr 24 23:48:16.515201 containerd[1460]: time="2026-04-24T23:48:16.515043715Z" level=warning msg="cleaning up after shim disconnected" id=f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7 namespace=k8s.io Apr 24 23:48:16.515201 containerd[1460]: time="2026-04-24T23:48:16.515066320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:48:17.053260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f80474a69e9f615f9b2220492cbf0e5683e58307bb2932b6cebc431934442ef7-rootfs.mount: Deactivated successfully. Apr 24 23:48:17.261805 kubelet[2505]: E0424 23:48:17.261347 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:17.457907 containerd[1460]: time="2026-04-24T23:48:17.457812090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:48:19.263081 kubelet[2505]: E0424 23:48:19.262862 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:21.262642 kubelet[2505]: E0424 23:48:21.261548 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpms9" 
podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:22.192332 containerd[1460]: time="2026-04-24T23:48:22.192164302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:22.194156 containerd[1460]: time="2026-04-24T23:48:22.192673556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:48:22.194156 containerd[1460]: time="2026-04-24T23:48:22.194094695Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:22.198275 containerd[1460]: time="2026-04-24T23:48:22.197592540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:22.199518 containerd[1460]: time="2026-04-24T23:48:22.198212904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.74028019s" Apr 24 23:48:22.199518 containerd[1460]: time="2026-04-24T23:48:22.199318795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:48:22.221030 containerd[1460]: time="2026-04-24T23:48:22.220926421Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:48:22.240635 containerd[1460]: 
time="2026-04-24T23:48:22.240420857Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36\"" Apr 24 23:48:22.241582 containerd[1460]: time="2026-04-24T23:48:22.241558503Z" level=info msg="StartContainer for \"a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36\"" Apr 24 23:48:22.298194 systemd[1]: Started cri-containerd-a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36.scope - libcontainer container a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36. Apr 24 23:48:22.347746 containerd[1460]: time="2026-04-24T23:48:22.347481938Z" level=info msg="StartContainer for \"a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36\" returns successfully" Apr 24 23:48:23.154126 systemd[1]: cri-containerd-a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36.scope: Deactivated successfully. Apr 24 23:48:23.192464 kubelet[2505]: I0424 23:48:23.191423 2505 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:48:23.194153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36-rootfs.mount: Deactivated successfully. 
Apr 24 23:48:23.196244 containerd[1460]: time="2026-04-24T23:48:23.196083847Z" level=info msg="shim disconnected" id=a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36 namespace=k8s.io Apr 24 23:48:23.196244 containerd[1460]: time="2026-04-24T23:48:23.196241425Z" level=warning msg="cleaning up after shim disconnected" id=a0ade74430b8df67659c74cd9d4c934569f7dd3ad6dce2cac773b817c9d94a36 namespace=k8s.io Apr 24 23:48:23.196651 containerd[1460]: time="2026-04-24T23:48:23.196260340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:48:23.292681 systemd[1]: Created slice kubepods-besteffort-podc75b49fd_5383_4d1b_9dab_badcf7790241.slice - libcontainer container kubepods-besteffort-podc75b49fd_5383_4d1b_9dab_badcf7790241.slice. Apr 24 23:48:23.358610 kubelet[2505]: I0424 23:48:23.358059 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8wm\" (UniqueName: \"kubernetes.io/projected/ab27142a-e4c1-4ad9-85f0-56df27f44b76-kube-api-access-sc8wm\") pod \"calico-apiserver-647b959c57-h2mjz\" (UID: \"ab27142a-e4c1-4ad9-85f0-56df27f44b76\") " pod="calico-system/calico-apiserver-647b959c57-h2mjz" Apr 24 23:48:23.358610 kubelet[2505]: I0424 23:48:23.358116 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czzkf\" (UniqueName: \"kubernetes.io/projected/c75b49fd-5383-4d1b-9dab-badcf7790241-kube-api-access-czzkf\") pod \"calico-kube-controllers-58c4885b55-2kpm5\" (UID: \"c75b49fd-5383-4d1b-9dab-badcf7790241\") " pod="calico-system/calico-kube-controllers-58c4885b55-2kpm5" Apr 24 23:48:23.358610 kubelet[2505]: I0424 23:48:23.358133 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab27142a-e4c1-4ad9-85f0-56df27f44b76-calico-apiserver-certs\") pod \"calico-apiserver-647b959c57-h2mjz\" (UID: 
\"ab27142a-e4c1-4ad9-85f0-56df27f44b76\") " pod="calico-system/calico-apiserver-647b959c57-h2mjz" Apr 24 23:48:23.358610 kubelet[2505]: I0424 23:48:23.358148 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c75b49fd-5383-4d1b-9dab-badcf7790241-tigera-ca-bundle\") pod \"calico-kube-controllers-58c4885b55-2kpm5\" (UID: \"c75b49fd-5383-4d1b-9dab-badcf7790241\") " pod="calico-system/calico-kube-controllers-58c4885b55-2kpm5" Apr 24 23:48:23.370099 systemd[1]: Created slice kubepods-besteffort-podab27142a_e4c1_4ad9_85f0_56df27f44b76.slice - libcontainer container kubepods-besteffort-podab27142a_e4c1_4ad9_85f0_56df27f44b76.slice. Apr 24 23:48:23.377450 systemd[1]: Created slice kubepods-besteffort-pod8b177f61_af99_4e0d_af51_f699d327434d.slice - libcontainer container kubepods-besteffort-pod8b177f61_af99_4e0d_af51_f699d327434d.slice. Apr 24 23:48:23.392943 containerd[1460]: time="2026-04-24T23:48:23.390574408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpms9,Uid:8b177f61-af99-4e0d-af51-f699d327434d,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:23.391061 systemd[1]: Created slice kubepods-besteffort-pod76788716_614e_4c65_978c_a90311cc57b5.slice - libcontainer container kubepods-besteffort-pod76788716_614e_4c65_978c_a90311cc57b5.slice. Apr 24 23:48:23.400979 systemd[1]: Created slice kubepods-burstable-pod6df4f42b_ef79_4fc6_af47_4c5bb2c7dea4.slice - libcontainer container kubepods-burstable-pod6df4f42b_ef79_4fc6_af47_4c5bb2c7dea4.slice. Apr 24 23:48:23.434175 systemd[1]: Created slice kubepods-burstable-pod730a30d0_cc29_4d69_b761_41db17443064.slice - libcontainer container kubepods-burstable-pod730a30d0_cc29_4d69_b761_41db17443064.slice. 
Apr 24 23:48:23.451112 systemd[1]: Created slice kubepods-besteffort-poddc5035c7_812a_4219_b111_db043ba6addb.slice - libcontainer container kubepods-besteffort-poddc5035c7_812a_4219_b111_db043ba6addb.slice. Apr 24 23:48:23.459392 kubelet[2505]: I0424 23:48:23.459320 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-nginx-config\") pod \"whisker-696958b597-pjb8q\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " pod="calico-system/whisker-696958b597-pjb8q" Apr 24 23:48:23.467416 kubelet[2505]: I0424 23:48:23.465620 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxfb\" (UniqueName: \"kubernetes.io/projected/dc5035c7-812a-4219-b111-db043ba6addb-kube-api-access-qnxfb\") pod \"whisker-696958b597-pjb8q\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " pod="calico-system/whisker-696958b597-pjb8q" Apr 24 23:48:23.467416 kubelet[2505]: I0424 23:48:23.465680 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/730a30d0-cc29-4d69-b761-41db17443064-config-volume\") pod \"coredns-674b8bbfcf-6fp8r\" (UID: \"730a30d0-cc29-4d69-b761-41db17443064\") " pod="kube-system/coredns-674b8bbfcf-6fp8r" Apr 24 23:48:23.467416 kubelet[2505]: I0424 23:48:23.465698 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5035c7-812a-4219-b111-db043ba6addb-whisker-backend-key-pair\") pod \"whisker-696958b597-pjb8q\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " pod="calico-system/whisker-696958b597-pjb8q" Apr 24 23:48:23.467416 kubelet[2505]: I0424 23:48:23.465745 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4-config-volume\") pod \"coredns-674b8bbfcf-bfrts\" (UID: \"6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4\") " pod="kube-system/coredns-674b8bbfcf-bfrts" Apr 24 23:48:23.467416 kubelet[2505]: I0424 23:48:23.465757 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppwn\" (UniqueName: \"kubernetes.io/projected/6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4-kube-api-access-mppwn\") pod \"coredns-674b8bbfcf-bfrts\" (UID: \"6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4\") " pod="kube-system/coredns-674b8bbfcf-bfrts" Apr 24 23:48:23.475956 kubelet[2505]: I0424 23:48:23.465805 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-whisker-ca-bundle\") pod \"whisker-696958b597-pjb8q\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " pod="calico-system/whisker-696958b597-pjb8q" Apr 24 23:48:23.475956 kubelet[2505]: I0424 23:48:23.465895 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76788716-614e-4c65-978c-a90311cc57b5-config\") pod \"goldmane-5b85766d88-8djts\" (UID: \"76788716-614e-4c65-978c-a90311cc57b5\") " pod="calico-system/goldmane-5b85766d88-8djts" Apr 24 23:48:23.475956 kubelet[2505]: I0424 23:48:23.465923 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tbp\" (UniqueName: \"kubernetes.io/projected/76788716-614e-4c65-978c-a90311cc57b5-kube-api-access-74tbp\") pod \"goldmane-5b85766d88-8djts\" (UID: \"76788716-614e-4c65-978c-a90311cc57b5\") " pod="calico-system/goldmane-5b85766d88-8djts" Apr 24 23:48:23.475956 kubelet[2505]: I0424 23:48:23.465946 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/76788716-614e-4c65-978c-a90311cc57b5-goldmane-key-pair\") pod \"goldmane-5b85766d88-8djts\" (UID: \"76788716-614e-4c65-978c-a90311cc57b5\") " pod="calico-system/goldmane-5b85766d88-8djts" Apr 24 23:48:23.475956 kubelet[2505]: I0424 23:48:23.466028 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrpb\" (UniqueName: \"kubernetes.io/projected/730a30d0-cc29-4d69-b761-41db17443064-kube-api-access-nxrpb\") pod \"coredns-674b8bbfcf-6fp8r\" (UID: \"730a30d0-cc29-4d69-b761-41db17443064\") " pod="kube-system/coredns-674b8bbfcf-6fp8r" Apr 24 23:48:23.473585 systemd[1]: Created slice kubepods-besteffort-podc44becb5_fcd6_4bd1_91f5_38af42d74697.slice - libcontainer container kubepods-besteffort-podc44becb5_fcd6_4bd1_91f5_38af42d74697.slice. Apr 24 23:48:23.487527 kubelet[2505]: I0424 23:48:23.466058 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76788716-614e-4c65-978c-a90311cc57b5-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-8djts\" (UID: \"76788716-614e-4c65-978c-a90311cc57b5\") " pod="calico-system/goldmane-5b85766d88-8djts" Apr 24 23:48:23.487527 kubelet[2505]: I0424 23:48:23.466082 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c44becb5-fcd6-4bd1-91f5-38af42d74697-calico-apiserver-certs\") pod \"calico-apiserver-647b959c57-7mztb\" (UID: \"c44becb5-fcd6-4bd1-91f5-38af42d74697\") " pod="calico-system/calico-apiserver-647b959c57-7mztb" Apr 24 23:48:23.487527 kubelet[2505]: I0424 23:48:23.466095 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmkp2\" (UniqueName: 
\"kubernetes.io/projected/c44becb5-fcd6-4bd1-91f5-38af42d74697-kube-api-access-lmkp2\") pod \"calico-apiserver-647b959c57-7mztb\" (UID: \"c44becb5-fcd6-4bd1-91f5-38af42d74697\") " pod="calico-system/calico-apiserver-647b959c57-7mztb" Apr 24 23:48:23.575477 containerd[1460]: time="2026-04-24T23:48:23.571148594Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:48:23.631763 containerd[1460]: time="2026-04-24T23:48:23.631435034Z" level=info msg="CreateContainer within sandbox \"d5d7d91c9cb9b2cdc8bbd999b612d1556d2cef33c4264d797bfed0d9099b9755\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd94b3112cb421009f8422ba213dda6701d5ca485de9b57bb8abe1cc65101258\"" Apr 24 23:48:23.637127 containerd[1460]: time="2026-04-24T23:48:23.636992806Z" level=info msg="StartContainer for \"bd94b3112cb421009f8422ba213dda6701d5ca485de9b57bb8abe1cc65101258\"" Apr 24 23:48:23.647502 containerd[1460]: time="2026-04-24T23:48:23.647136474Z" level=error msg="Failed to destroy network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.649595 containerd[1460]: time="2026-04-24T23:48:23.649539523Z" level=error msg="encountered an error cleaning up failed sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.649827 containerd[1460]: time="2026-04-24T23:48:23.649797678Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-zpms9,Uid:8b177f61-af99-4e0d-af51-f699d327434d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.659019 kubelet[2505]: E0424 23:48:23.658352 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.659019 kubelet[2505]: E0424 23:48:23.658605 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zpms9" Apr 24 23:48:23.659019 kubelet[2505]: E0424 23:48:23.658644 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zpms9" Apr 24 23:48:23.659314 kubelet[2505]: E0424 23:48:23.658739 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-zpms9_calico-system(8b177f61-af99-4e0d-af51-f699d327434d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zpms9_calico-system(8b177f61-af99-4e0d-af51-f699d327434d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zpms9" podUID="8b177f61-af99-4e0d-af51-f699d327434d" Apr 24 23:48:23.661187 containerd[1460]: time="2026-04-24T23:48:23.661147887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c4885b55-2kpm5,Uid:c75b49fd-5383-4d1b-9dab-badcf7790241,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:23.677354 containerd[1460]: time="2026-04-24T23:48:23.677244664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-h2mjz,Uid:ab27142a-e4c1-4ad9-85f0-56df27f44b76,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:23.683967 systemd[1]: Started cri-containerd-bd94b3112cb421009f8422ba213dda6701d5ca485de9b57bb8abe1cc65101258.scope - libcontainer container bd94b3112cb421009f8422ba213dda6701d5ca485de9b57bb8abe1cc65101258. 
Apr 24 23:48:23.705513 containerd[1460]: time="2026-04-24T23:48:23.702047123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8djts,Uid:76788716-614e-4c65-978c-a90311cc57b5,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:23.719155 kubelet[2505]: E0424 23:48:23.718904 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:23.720569 containerd[1460]: time="2026-04-24T23:48:23.720416225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfrts,Uid:6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4,Namespace:kube-system,Attempt:0,}" Apr 24 23:48:23.745474 containerd[1460]: time="2026-04-24T23:48:23.745301769Z" level=info msg="StartContainer for \"bd94b3112cb421009f8422ba213dda6701d5ca485de9b57bb8abe1cc65101258\" returns successfully" Apr 24 23:48:23.747316 kubelet[2505]: E0424 23:48:23.742410 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:23.749083 containerd[1460]: time="2026-04-24T23:48:23.748986772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6fp8r,Uid:730a30d0-cc29-4d69-b761-41db17443064,Namespace:kube-system,Attempt:0,}" Apr 24 23:48:23.755635 containerd[1460]: time="2026-04-24T23:48:23.755612627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-696958b597-pjb8q,Uid:dc5035c7-812a-4219-b111-db043ba6addb,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:23.792627 containerd[1460]: time="2026-04-24T23:48:23.792545441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-7mztb,Uid:c44becb5-fcd6-4bd1-91f5-38af42d74697,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:23.853693 containerd[1460]: time="2026-04-24T23:48:23.853250031Z" level=error msg="Failed to 
destroy network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.855913 containerd[1460]: time="2026-04-24T23:48:23.854194768Z" level=error msg="Failed to destroy network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.856425 containerd[1460]: time="2026-04-24T23:48:23.856391229Z" level=error msg="encountered an error cleaning up failed sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.856724 containerd[1460]: time="2026-04-24T23:48:23.856695348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c4885b55-2kpm5,Uid:c75b49fd-5383-4d1b-9dab-badcf7790241,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.858203 containerd[1460]: time="2026-04-24T23:48:23.858141851Z" level=error msg="encountered an error cleaning up failed sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.858359 containerd[1460]: time="2026-04-24T23:48:23.858336901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8djts,Uid:76788716-614e-4c65-978c-a90311cc57b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.858592 kubelet[2505]: E0424 23:48:23.858556 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.858994 kubelet[2505]: E0424 23:48:23.858969 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-8djts" Apr 24 23:48:23.859144 kubelet[2505]: E0424 23:48:23.859080 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-8djts" Apr 24 23:48:23.859349 kubelet[2505]: E0424 23:48:23.858116 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.859551 kubelet[2505]: E0424 23:48:23.859487 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c4885b55-2kpm5" Apr 24 23:48:23.862005 kubelet[2505]: E0424 23:48:23.859673 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c4885b55-2kpm5" Apr 24 23:48:23.862393 kubelet[2505]: E0424 23:48:23.862164 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-8djts_calico-system(76788716-614e-4c65-978c-a90311cc57b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-8djts_calico-system(76788716-614e-4c65-978c-a90311cc57b5)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-8djts" podUID="76788716-614e-4c65-978c-a90311cc57b5" Apr 24 23:48:23.864030 kubelet[2505]: E0424 23:48:23.863135 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58c4885b55-2kpm5_calico-system(c75b49fd-5383-4d1b-9dab-badcf7790241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58c4885b55-2kpm5_calico-system(c75b49fd-5383-4d1b-9dab-badcf7790241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c4885b55-2kpm5" podUID="c75b49fd-5383-4d1b-9dab-badcf7790241" Apr 24 23:48:23.972834 containerd[1460]: time="2026-04-24T23:48:23.972350851Z" level=error msg="Failed to destroy network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.973600 containerd[1460]: time="2026-04-24T23:48:23.972982770Z" level=error msg="encountered an error cleaning up failed sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.973600 containerd[1460]: time="2026-04-24T23:48:23.973111086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-h2mjz,Uid:ab27142a-e4c1-4ad9-85f0-56df27f44b76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.974516 kubelet[2505]: E0424 23:48:23.974124 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:23.974516 kubelet[2505]: E0424 23:48:23.974299 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-647b959c57-h2mjz" Apr 24 23:48:23.974516 kubelet[2505]: E0424 23:48:23.974358 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-apiserver-647b959c57-h2mjz" Apr 24 23:48:23.974820 kubelet[2505]: E0424 23:48:23.974486 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647b959c57-h2mjz_calico-system(ab27142a-e4c1-4ad9-85f0-56df27f44b76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647b959c57-h2mjz_calico-system(ab27142a-e4c1-4ad9-85f0-56df27f44b76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-647b959c57-h2mjz" podUID="ab27142a-e4c1-4ad9-85f0-56df27f44b76" Apr 24 23:48:24.019494 containerd[1460]: time="2026-04-24T23:48:24.019427536Z" level=error msg="Failed to destroy network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.019902 containerd[1460]: time="2026-04-24T23:48:24.019860723Z" level=error msg="encountered an error cleaning up failed sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.019965 containerd[1460]: time="2026-04-24T23:48:24.019943795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-696958b597-pjb8q,Uid:dc5035c7-812a-4219-b111-db043ba6addb,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.020286 kubelet[2505]: E0424 23:48:24.020238 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.020335 kubelet[2505]: E0424 23:48:24.020300 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-696958b597-pjb8q" Apr 24 23:48:24.020335 kubelet[2505]: E0424 23:48:24.020317 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-696958b597-pjb8q" Apr 24 23:48:24.020444 kubelet[2505]: E0424 23:48:24.020397 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-696958b597-pjb8q_calico-system(dc5035c7-812a-4219-b111-db043ba6addb)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"whisker-696958b597-pjb8q_calico-system(dc5035c7-812a-4219-b111-db043ba6addb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-696958b597-pjb8q" podUID="dc5035c7-812a-4219-b111-db043ba6addb" Apr 24 23:48:24.028127 containerd[1460]: time="2026-04-24T23:48:24.027937795Z" level=error msg="Failed to destroy network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.032657 containerd[1460]: time="2026-04-24T23:48:24.031737133Z" level=error msg="encountered an error cleaning up failed sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.032657 containerd[1460]: time="2026-04-24T23:48:24.031786075Z" level=error msg="Failed to destroy network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.032657 containerd[1460]: time="2026-04-24T23:48:24.031895120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6fp8r,Uid:730a30d0-cc29-4d69-b761-41db17443064,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.032811 kubelet[2505]: E0424 23:48:24.032562 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.032811 kubelet[2505]: E0424 23:48:24.032688 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6fp8r" Apr 24 23:48:24.032811 kubelet[2505]: E0424 23:48:24.032745 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6fp8r" Apr 24 23:48:24.033009 kubelet[2505]: E0424 23:48:24.032888 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6fp8r_kube-system(730a30d0-cc29-4d69-b761-41db17443064)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-674b8bbfcf-6fp8r_kube-system(730a30d0-cc29-4d69-b761-41db17443064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6fp8r" podUID="730a30d0-cc29-4d69-b761-41db17443064" Apr 24 23:48:24.034088 containerd[1460]: time="2026-04-24T23:48:24.033611658Z" level=error msg="encountered an error cleaning up failed sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.034088 containerd[1460]: time="2026-04-24T23:48:24.033671069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfrts,Uid:6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.034249 kubelet[2505]: E0424 23:48:24.033879 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.034249 kubelet[2505]: E0424 
23:48:24.033960 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bfrts" Apr 24 23:48:24.034249 kubelet[2505]: E0424 23:48:24.033979 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bfrts" Apr 24 23:48:24.034419 kubelet[2505]: E0424 23:48:24.034043 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bfrts_kube-system(6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bfrts_kube-system(6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bfrts" podUID="6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4" Apr 24 23:48:24.034680 containerd[1460]: time="2026-04-24T23:48:24.034621799Z" level=error msg="Failed to destroy network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.034989 containerd[1460]: time="2026-04-24T23:48:24.034960444Z" level=error msg="encountered an error cleaning up failed sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.035032 containerd[1460]: time="2026-04-24T23:48:24.035007132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-7mztb,Uid:c44becb5-fcd6-4bd1-91f5-38af42d74697,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.035227 kubelet[2505]: E0424 23:48:24.035193 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:48:24.035296 kubelet[2505]: E0424 23:48:24.035244 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-apiserver-647b959c57-7mztb" Apr 24 23:48:24.035429 kubelet[2505]: E0424 23:48:24.035305 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-647b959c57-7mztb" Apr 24 23:48:24.035498 kubelet[2505]: E0424 23:48:24.035460 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647b959c57-7mztb_calico-system(c44becb5-fcd6-4bd1-91f5-38af42d74697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647b959c57-7mztb_calico-system(c44becb5-fcd6-4bd1-91f5-38af42d74697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-647b959c57-7mztb" podUID="c44becb5-fcd6-4bd1-91f5-38af42d74697" Apr 24 23:48:24.435401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668-shm.mount: Deactivated successfully. 
Apr 24 23:48:24.528661 kubelet[2505]: I0424 23:48:24.528549 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:24.537128 kubelet[2505]: I0424 23:48:24.537022 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:24.538302 kubelet[2505]: I0424 23:48:24.538241 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:24.543704 containerd[1460]: time="2026-04-24T23:48:24.543602753Z" level=info msg="StopPodSandbox for \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\"" Apr 24 23:48:24.548988 containerd[1460]: time="2026-04-24T23:48:24.545820743Z" level=info msg="StopPodSandbox for \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\"" Apr 24 23:48:24.548988 containerd[1460]: time="2026-04-24T23:48:24.548104963Z" level=info msg="StopPodSandbox for \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\"" Apr 24 23:48:24.549068 kubelet[2505]: I0424 23:48:24.546142 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:24.550398 containerd[1460]: time="2026-04-24T23:48:24.549661970Z" level=info msg="Ensure that sandbox 30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b in task-service has been cleanup successfully" Apr 24 23:48:24.550697 kubelet[2505]: I0424 23:48:24.550625 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:24.551847 containerd[1460]: time="2026-04-24T23:48:24.551808089Z" level=info msg="Ensure that sandbox 
2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335 in task-service has been cleanup successfully" Apr 24 23:48:24.552338 containerd[1460]: time="2026-04-24T23:48:24.552289215Z" level=info msg="Ensure that sandbox d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9 in task-service has been cleanup successfully" Apr 24 23:48:24.554867 containerd[1460]: time="2026-04-24T23:48:24.554744905Z" level=info msg="StopPodSandbox for \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\"" Apr 24 23:48:24.555361 kubelet[2505]: I0424 23:48:24.555323 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Apr 24 23:48:24.556793 containerd[1460]: time="2026-04-24T23:48:24.556748698Z" level=info msg="Ensure that sandbox 6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9 in task-service has been cleanup successfully" Apr 24 23:48:24.557399 containerd[1460]: time="2026-04-24T23:48:24.557354655Z" level=info msg="StopPodSandbox for \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\"" Apr 24 23:48:24.557554 containerd[1460]: time="2026-04-24T23:48:24.557510784Z" level=info msg="Ensure that sandbox f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668 in task-service has been cleanup successfully" Apr 24 23:48:24.558422 containerd[1460]: time="2026-04-24T23:48:24.558098933Z" level=info msg="StopPodSandbox for \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\"" Apr 24 23:48:24.558422 containerd[1460]: time="2026-04-24T23:48:24.558220600Z" level=info msg="Ensure that sandbox 43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467 in task-service has been cleanup successfully" Apr 24 23:48:24.600994 kubelet[2505]: I0424 23:48:24.600336 2505 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:24.613802 containerd[1460]: time="2026-04-24T23:48:24.613686141Z" level=info msg="StopPodSandbox for \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\"" Apr 24 23:48:24.614426 containerd[1460]: time="2026-04-24T23:48:24.614091323Z" level=info msg="Ensure that sandbox 31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3 in task-service has been cleanup successfully" Apr 24 23:48:24.622189 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). Apr 24 23:48:24.641108 kubelet[2505]: I0424 23:48:24.640065 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8dlbs" podStartSLOduration=3.4156464189999998 podStartE2EDuration="21.639997126s" podCreationTimestamp="2026-04-24 23:48:03 +0000 UTC" firstStartedPulling="2026-04-24 23:48:03.982897776 +0000 UTC m=+17.823824860" lastFinishedPulling="2026-04-24 23:48:22.207248486 +0000 UTC m=+36.048175567" observedRunningTime="2026-04-24 23:48:24.639860558 +0000 UTC m=+38.480787641" watchObservedRunningTime="2026-04-24 23:48:24.639997126 +0000 UTC m=+38.480924206" Apr 24 23:48:24.673107 kubelet[2505]: I0424 23:48:24.672938 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:24.675598 containerd[1460]: time="2026-04-24T23:48:24.674945064Z" level=info msg="StopPodSandbox for \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\"" Apr 24 23:48:24.675598 containerd[1460]: time="2026-04-24T23:48:24.675283201Z" level=info msg="Ensure that sandbox b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b in task-service has been cleanup successfully" Apr 24 23:48:24.713370 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA 
SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:24.714007 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:24.727344 systemd-logind[1438]: New session 8 of user core. Apr 24 23:48:24.731962 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:48:25.041187 sshd[3862]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:25.045205 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:59654.service: Deactivated successfully. Apr 24 23:48:25.054200 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:48:25.057896 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:48:25.059741 systemd-logind[1438]: Removed session 8. Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:24.857 [INFO][3847] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:24.857 [INFO][3847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" iface="eth0" netns="/var/run/netns/cni-fa7772dd-fb06-5b02-166a-f9da1c3c7daf" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:24.909 [INFO][3847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" iface="eth0" netns="/var/run/netns/cni-fa7772dd-fb06-5b02-166a-f9da1c3c7daf" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:24.954 [INFO][3847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" iface="eth0" netns="/var/run/netns/cni-fa7772dd-fb06-5b02-166a-f9da1c3c7daf" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:24.954 [INFO][3847] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:24.954 [INFO][3847] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.078 [INFO][3988] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.078 [INFO][3988] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.078 [INFO][3988] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.122 [WARNING][3988] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.123 [INFO][3988] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.125 [INFO][3988] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.138833 containerd[1460]: 2026-04-24 23:48:25.133 [INFO][3847] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:25.145661 containerd[1460]: time="2026-04-24T23:48:25.145456406Z" level=info msg="TearDown network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\" successfully" Apr 24 23:48:25.145976 containerd[1460]: time="2026-04-24T23:48:25.145959747Z" level=info msg="StopPodSandbox for \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\" returns successfully" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:24.846 [INFO][3860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:24.847 [INFO][3860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" iface="eth0" netns="/var/run/netns/cni-5d310e8a-1573-26b8-26d9-395fa5fd556b" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:24.847 [INFO][3860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" iface="eth0" netns="/var/run/netns/cni-5d310e8a-1573-26b8-26d9-395fa5fd556b" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:24.855 [INFO][3860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" iface="eth0" netns="/var/run/netns/cni-5d310e8a-1573-26b8-26d9-395fa5fd556b" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:24.855 [INFO][3860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:24.855 [INFO][3860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.082 [INFO][3975] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.082 [INFO][3975] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.128 [INFO][3975] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.143 [WARNING][3975] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.143 [INFO][3975] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.145 [INFO][3975] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.152121 containerd[1460]: 2026-04-24 23:48:25.148 [INFO][3860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:25.152698 containerd[1460]: time="2026-04-24T23:48:25.152204996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8djts,Uid:76788716-614e-4c65-978c-a90311cc57b5,Namespace:calico-system,Attempt:1,}" Apr 24 23:48:25.152698 containerd[1460]: time="2026-04-24T23:48:25.152368317Z" level=info msg="TearDown network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\" successfully" Apr 24 23:48:25.152698 containerd[1460]: time="2026-04-24T23:48:25.152406417Z" level=info msg="StopPodSandbox for \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\" returns successfully" Apr 24 23:48:25.153367 systemd[1]: run-netns-cni\x2dfa7772dd\x2dfb06\x2d5b02\x2d166a\x2df9da1c3c7daf.mount: Deactivated successfully. 
Apr 24 23:48:25.155456 containerd[1460]: time="2026-04-24T23:48:25.155187684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-7mztb,Uid:c44becb5-fcd6-4bd1-91f5-38af42d74697,Namespace:calico-system,Attempt:1,}" Apr 24 23:48:25.155460 systemd[1]: run-netns-cni\x2d5d310e8a\x2d1573\x2d26b8\x2d26d9\x2d395fa5fd556b.mount: Deactivated successfully. Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:24.901 [INFO][3924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:24.902 [INFO][3924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" iface="eth0" netns="/var/run/netns/cni-602901db-84d4-0c7a-34dc-a2401e285a8c" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:24.968 [INFO][3924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" iface="eth0" netns="/var/run/netns/cni-602901db-84d4-0c7a-34dc-a2401e285a8c" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:24.976 [INFO][3924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" iface="eth0" netns="/var/run/netns/cni-602901db-84d4-0c7a-34dc-a2401e285a8c" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:24.976 [INFO][3924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:24.976 [INFO][3924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.079 [INFO][3995] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.082 [INFO][3995] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.145 [INFO][3995] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.153 [WARNING][3995] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.153 [INFO][3995] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.154 [INFO][3995] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.179287 containerd[1460]: 2026-04-24 23:48:25.164 [INFO][3924] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:25.183924 containerd[1460]: time="2026-04-24T23:48:25.182792818Z" level=info msg="TearDown network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\" successfully" Apr 24 23:48:25.183924 containerd[1460]: time="2026-04-24T23:48:25.182858202Z" level=info msg="StopPodSandbox for \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\" returns successfully" Apr 24 23:48:25.184248 systemd[1]: run-netns-cni\x2d602901db\x2d84d4\x2d0c7a\x2d34dc\x2da2401e285a8c.mount: Deactivated successfully. 
Apr 24 23:48:25.186196 kubelet[2505]: E0424 23:48:25.185738 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:25.191812 containerd[1460]: time="2026-04-24T23:48:25.191235640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfrts,Uid:6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4,Namespace:kube-system,Attempt:1,}" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.012 [INFO][3853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.027 [INFO][3853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" iface="eth0" netns="/var/run/netns/cni-898143aa-ce49-6606-9962-a2b407039712" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.031 [INFO][3853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" iface="eth0" netns="/var/run/netns/cni-898143aa-ce49-6606-9962-a2b407039712" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.031 [INFO][3853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" iface="eth0" netns="/var/run/netns/cni-898143aa-ce49-6606-9962-a2b407039712" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.031 [INFO][3853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.031 [INFO][3853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.120 [INFO][4010] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.120 [INFO][4010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.155 [INFO][4010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.174 [WARNING][4010] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.175 [INFO][4010] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.184 [INFO][4010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.194273 containerd[1460]: 2026-04-24 23:48:25.187 [INFO][3853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:25.198203 containerd[1460]: time="2026-04-24T23:48:25.198057303Z" level=info msg="TearDown network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\" successfully" Apr 24 23:48:25.199570 containerd[1460]: time="2026-04-24T23:48:25.199321496Z" level=info msg="StopPodSandbox for \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\" returns successfully" Apr 24 23:48:25.208527 containerd[1460]: time="2026-04-24T23:48:25.208410303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-h2mjz,Uid:ab27142a-e4c1-4ad9-85f0-56df27f44b76,Namespace:calico-system,Attempt:1,}" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:24.823 [INFO][3854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:24.823 [INFO][3854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" iface="eth0" netns="/var/run/netns/cni-f1ce8e23-dabf-7a2c-1c20-6ebdfdc857be" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:24.825 [INFO][3854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" iface="eth0" netns="/var/run/netns/cni-f1ce8e23-dabf-7a2c-1c20-6ebdfdc857be" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:24.837 [INFO][3854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" iface="eth0" netns="/var/run/netns/cni-f1ce8e23-dabf-7a2c-1c20-6ebdfdc857be" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:24.838 [INFO][3854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:24.838 [INFO][3854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.155 [INFO][3972] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.155 [INFO][3972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.184 [INFO][3972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.225 [WARNING][3972] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.225 [INFO][3972] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.230 [INFO][3972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.244236 containerd[1460]: 2026-04-24 23:48:25.233 [INFO][3854] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:25.247827 containerd[1460]: time="2026-04-24T23:48:25.247685454Z" level=info msg="TearDown network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\" successfully" Apr 24 23:48:25.247827 containerd[1460]: time="2026-04-24T23:48:25.247810401Z" level=info msg="StopPodSandbox for \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\" returns successfully" Apr 24 23:48:25.252012 containerd[1460]: time="2026-04-24T23:48:25.251950083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c4885b55-2kpm5,Uid:c75b49fd-5383-4d1b-9dab-badcf7790241,Namespace:calico-system,Attempt:1,}" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:24.919 [INFO][3899] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:24.919 [INFO][3899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" iface="eth0" netns="/var/run/netns/cni-1cac9c28-afc3-9d83-3725-27255ec3eea1" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:24.922 [INFO][3899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" iface="eth0" netns="/var/run/netns/cni-1cac9c28-afc3-9d83-3725-27255ec3eea1" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:24.978 [INFO][3899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" iface="eth0" netns="/var/run/netns/cni-1cac9c28-afc3-9d83-3725-27255ec3eea1" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:24.979 [INFO][3899] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:24.979 [INFO][3899] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.147 [INFO][4000] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.157 [INFO][4000] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.230 [INFO][4000] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.247 [WARNING][4000] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.247 [INFO][4000] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.257 [INFO][4000] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.276637 containerd[1460]: 2026-04-24 23:48:25.267 [INFO][3899] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Apr 24 23:48:25.277266 containerd[1460]: time="2026-04-24T23:48:25.276980843Z" level=info msg="TearDown network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\" successfully" Apr 24 23:48:25.277266 containerd[1460]: time="2026-04-24T23:48:25.277008523Z" level=info msg="StopPodSandbox for \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\" returns successfully" Apr 24 23:48:25.281021 containerd[1460]: time="2026-04-24T23:48:25.280993527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpms9,Uid:8b177f61-af99-4e0d-af51-f699d327434d,Namespace:calico-system,Attempt:1,}" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.118 [INFO][3905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.120 [INFO][3905] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" iface="eth0" netns="/var/run/netns/cni-046fe6fa-6d5d-b3b7-5858-0d0b69db5aa6" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.120 [INFO][3905] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" iface="eth0" netns="/var/run/netns/cni-046fe6fa-6d5d-b3b7-5858-0d0b69db5aa6" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.121 [INFO][3905] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" iface="eth0" netns="/var/run/netns/cni-046fe6fa-6d5d-b3b7-5858-0d0b69db5aa6" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.121 [INFO][3905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.121 [INFO][3905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.166 [INFO][4030] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.173 [INFO][4030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.258 [INFO][4030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.274 [WARNING][4030] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.274 [INFO][4030] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.289 [INFO][4030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.313447 containerd[1460]: 2026-04-24 23:48:25.297 [INFO][3905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:25.321015 containerd[1460]: time="2026-04-24T23:48:25.316709835Z" level=info msg="TearDown network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\" successfully" Apr 24 23:48:25.321015 containerd[1460]: time="2026-04-24T23:48:25.316974527Z" level=info msg="StopPodSandbox for \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\" returns successfully" Apr 24 23:48:25.321065 kubelet[2505]: E0424 23:48:25.318998 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:25.321756 containerd[1460]: time="2026-04-24T23:48:25.321691666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6fp8r,Uid:730a30d0-cc29-4d69-b761-41db17443064,Namespace:kube-system,Attempt:1,}" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.079 [INFO][3859] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.112 [INFO][3859] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" iface="eth0" netns="/var/run/netns/cni-ab653b03-0b1d-0cc9-0671-f89a32629b5a" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.120 [INFO][3859] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" iface="eth0" netns="/var/run/netns/cni-ab653b03-0b1d-0cc9-0671-f89a32629b5a" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.121 [INFO][3859] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" iface="eth0" netns="/var/run/netns/cni-ab653b03-0b1d-0cc9-0671-f89a32629b5a" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.121 [INFO][3859] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.121 [INFO][3859] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.241 [INFO][4036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.245 [INFO][4036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.292 [INFO][4036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.331 [WARNING][4036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.331 [INFO][4036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.337 [INFO][4036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.361875 containerd[1460]: 2026-04-24 23:48:25.348 [INFO][3859] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:25.361875 containerd[1460]: time="2026-04-24T23:48:25.362011264Z" level=info msg="TearDown network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\" successfully" Apr 24 23:48:25.362591 containerd[1460]: time="2026-04-24T23:48:25.362080676Z" level=info msg="StopPodSandbox for \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\" returns successfully" Apr 24 23:48:25.432239 systemd[1]: run-netns-cni\x2dab653b03\x2d0b1d\x2d0cc9\x2d0671\x2df89a32629b5a.mount: Deactivated successfully. Apr 24 23:48:25.432628 systemd[1]: run-netns-cni\x2d046fe6fa\x2d6d5d\x2db3b7\x2d5858\x2d0d0b69db5aa6.mount: Deactivated successfully. 
Apr 24 23:48:25.433024 systemd[1]: run-netns-cni\x2d898143aa\x2dce49\x2d6606\x2d9962\x2da2b407039712.mount: Deactivated successfully. Apr 24 23:48:25.433073 systemd[1]: run-netns-cni\x2df1ce8e23\x2ddabf\x2d7a2c\x2d1c20\x2d6ebdfdc857be.mount: Deactivated successfully. Apr 24 23:48:25.433124 systemd[1]: run-netns-cni\x2d1cac9c28\x2dafc3\x2d9d83\x2d3725\x2d27255ec3eea1.mount: Deactivated successfully. Apr 24 23:48:25.498940 kubelet[2505]: I0424 23:48:25.497716 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-nginx-config\") pod \"dc5035c7-812a-4219-b111-db043ba6addb\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " Apr 24 23:48:25.498940 kubelet[2505]: I0424 23:48:25.497826 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-whisker-ca-bundle\") pod \"dc5035c7-812a-4219-b111-db043ba6addb\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " Apr 24 23:48:25.498940 kubelet[2505]: I0424 23:48:25.497886 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5035c7-812a-4219-b111-db043ba6addb-whisker-backend-key-pair\") pod \"dc5035c7-812a-4219-b111-db043ba6addb\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " Apr 24 23:48:25.498940 kubelet[2505]: I0424 23:48:25.497911 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnxfb\" (UniqueName: \"kubernetes.io/projected/dc5035c7-812a-4219-b111-db043ba6addb-kube-api-access-qnxfb\") pod \"dc5035c7-812a-4219-b111-db043ba6addb\" (UID: \"dc5035c7-812a-4219-b111-db043ba6addb\") " Apr 24 23:48:25.499737 kubelet[2505]: I0424 23:48:25.499358 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dc5035c7-812a-4219-b111-db043ba6addb" (UID: "dc5035c7-812a-4219-b111-db043ba6addb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:48:25.499761 kubelet[2505]: I0424 23:48:25.499737 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "dc5035c7-812a-4219-b111-db043ba6addb" (UID: "dc5035c7-812a-4219-b111-db043ba6addb"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:48:25.512039 kubelet[2505]: I0424 23:48:25.511139 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5035c7-812a-4219-b111-db043ba6addb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dc5035c7-812a-4219-b111-db043ba6addb" (UID: "dc5035c7-812a-4219-b111-db043ba6addb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:48:25.512544 kubelet[2505]: I0424 23:48:25.512333 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5035c7-812a-4219-b111-db043ba6addb-kube-api-access-qnxfb" (OuterVolumeSpecName: "kube-api-access-qnxfb") pod "dc5035c7-812a-4219-b111-db043ba6addb" (UID: "dc5035c7-812a-4219-b111-db043ba6addb"). InnerVolumeSpecName "kube-api-access-qnxfb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:48:25.513375 systemd[1]: var-lib-kubelet-pods-dc5035c7\x2d812a\x2d4219\x2db111\x2ddb043ba6addb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqnxfb.mount: Deactivated successfully. 
Apr 24 23:48:25.513519 systemd[1]: var-lib-kubelet-pods-dc5035c7\x2d812a\x2d4219\x2db111\x2ddb043ba6addb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:48:25.533193 systemd-networkd[1391]: cali8f135889d78: Link UP Apr 24 23:48:25.533629 systemd-networkd[1391]: cali8f135889d78: Gained carrier Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.324 [ERROR][4060] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.355 [INFO][4060] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bfrts-eth0 coredns-674b8bbfcf- kube-system 6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4 941 0 2026-04-24 23:47:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bfrts eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f135889d78 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.356 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.442 [INFO][4141] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" HandleID="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.453 [INFO][4141] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" HandleID="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c9af0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bfrts", "timestamp":"2026-04-24 23:48:25.442060688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002106e0)} Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.453 [INFO][4141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.453 [INFO][4141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.453 [INFO][4141] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.456 [INFO][4141] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.461 [INFO][4141] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.468 [INFO][4141] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.470 [INFO][4141] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.472 [INFO][4141] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.472 [INFO][4141] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.474 [INFO][4141] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9 Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.482 [INFO][4141] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.492 [INFO][4141] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.492 [INFO][4141] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" host="localhost" Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.492 [INFO][4141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.557137 containerd[1460]: 2026-04-24 23:48:25.493 [INFO][4141] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" HandleID="k8s-pod-network.f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.557830 containerd[1460]: 2026-04-24 23:48:25.505 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bfrts-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bfrts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f135889d78", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:25.557830 containerd[1460]: 2026-04-24 23:48:25.506 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.557830 containerd[1460]: 2026-04-24 23:48:25.506 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f135889d78 ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.557830 containerd[1460]: 2026-04-24 23:48:25.534 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.557830 containerd[1460]: 2026-04-24 23:48:25.534 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bfrts-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9", Pod:"coredns-674b8bbfcf-bfrts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f135889d78", MAC:"4e:d0:c2:35:3d:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:25.557830 containerd[1460]: 2026-04-24 23:48:25.548 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfrts" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:25.600934 kubelet[2505]: I0424 23:48:25.600432 2505 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 24 23:48:25.602270 kubelet[2505]: I0424 23:48:25.602143 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc5035c7-812a-4219-b111-db043ba6addb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 24 23:48:25.602270 kubelet[2505]: I0424 23:48:25.602194 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5035c7-812a-4219-b111-db043ba6addb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 24 23:48:25.602270 kubelet[2505]: I0424 23:48:25.602205 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qnxfb\" (UniqueName: \"kubernetes.io/projected/dc5035c7-812a-4219-b111-db043ba6addb-kube-api-access-qnxfb\") on node \"localhost\" DevicePath \"\"" Apr 24 23:48:25.610714 containerd[1460]: time="2026-04-24T23:48:25.610304372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:25.610714 containerd[1460]: time="2026-04-24T23:48:25.610496341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:25.610714 containerd[1460]: time="2026-04-24T23:48:25.610506670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:25.611861 containerd[1460]: time="2026-04-24T23:48:25.610701726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:25.620254 systemd-networkd[1391]: cali9dd4368cc57: Link UP Apr 24 23:48:25.621041 systemd-networkd[1391]: cali9dd4368cc57: Gained carrier Apr 24 23:48:25.642316 systemd[1]: Started cri-containerd-f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9.scope - libcontainer container f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9. 
Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.342 [ERROR][4086] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.367 [INFO][4086] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0 calico-apiserver-647b959c57- calico-system ab27142a-e4c1-4ad9-85f0-56df27f44b76 945 0 2026-04-24 23:48:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647b959c57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647b959c57-h2mjz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9dd4368cc57 [] [] }} ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.367 [INFO][4086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.437 [INFO][4144] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" HandleID="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 
23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.455 [INFO][4144] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" HandleID="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-647b959c57-h2mjz", "timestamp":"2026-04-24 23:48:25.437561536 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d51e0)} Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.455 [INFO][4144] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.493 [INFO][4144] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.494 [INFO][4144] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.556 [INFO][4144] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.562 [INFO][4144] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.566 [INFO][4144] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.568 [INFO][4144] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.570 [INFO][4144] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.570 [INFO][4144] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.576 [INFO][4144] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18 Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.584 [INFO][4144] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.606 [INFO][4144] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.606 [INFO][4144] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" host="localhost" Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.607 [INFO][4144] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:25.645696 containerd[1460]: 2026-04-24 23:48:25.607 [INFO][4144] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" HandleID="k8s-pod-network.90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.646172 containerd[1460]: 2026-04-24 23:48:25.613 [INFO][4086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"ab27142a-e4c1-4ad9-85f0-56df27f44b76", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647b959c57-h2mjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9dd4368cc57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:25.646172 containerd[1460]: 2026-04-24 23:48:25.614 [INFO][4086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.646172 containerd[1460]: 2026-04-24 23:48:25.614 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dd4368cc57 ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.646172 containerd[1460]: 2026-04-24 23:48:25.622 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.646172 containerd[1460]: 2026-04-24 23:48:25.626 [INFO][4086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"ab27142a-e4c1-4ad9-85f0-56df27f44b76", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18", Pod:"calico-apiserver-647b959c57-h2mjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9dd4368cc57", MAC:"2a:8a:d2:f2:ed:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:25.646172 containerd[1460]: 2026-04-24 23:48:25.642 [INFO][4086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18" Namespace="calico-system" Pod="calico-apiserver-647b959c57-h2mjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:25.668130 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:25.700939 systemd[1]: Removed slice kubepods-besteffort-poddc5035c7_812a_4219_b111_db043ba6addb.slice - libcontainer container kubepods-besteffort-poddc5035c7_812a_4219_b111_db043ba6addb.slice. Apr 24 23:48:25.733658 containerd[1460]: time="2026-04-24T23:48:25.726504402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:25.733658 containerd[1460]: time="2026-04-24T23:48:25.726616619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:25.733658 containerd[1460]: time="2026-04-24T23:48:25.726636134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:25.733658 containerd[1460]: time="2026-04-24T23:48:25.727404008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:25.750478 containerd[1460]: time="2026-04-24T23:48:25.750420285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfrts,Uid:6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4,Namespace:kube-system,Attempt:1,} returns sandbox id \"f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9\"" Apr 24 23:48:25.752379 kubelet[2505]: E0424 23:48:25.752337 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:25.766679 containerd[1460]: time="2026-04-24T23:48:25.766467088Z" level=info msg="CreateContainer within sandbox \"f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:48:25.792129 systemd[1]: Started cri-containerd-90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18.scope - libcontainer container 90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18. Apr 24 23:48:25.840833 systemd[1]: Created slice kubepods-besteffort-podab9fbc96_b40f_4822_b1a2_7020989db95d.slice - libcontainer container kubepods-besteffort-podab9fbc96_b40f_4822_b1a2_7020989db95d.slice. 
Apr 24 23:48:25.844499 containerd[1460]: time="2026-04-24T23:48:25.842178187Z" level=info msg="CreateContainer within sandbox \"f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3a319ead38d26ff356ed497b1b63281d2bdc815883d6f6b31abd09e88f43f78\"" Apr 24 23:48:25.847857 containerd[1460]: time="2026-04-24T23:48:25.846878847Z" level=info msg="StartContainer for \"b3a319ead38d26ff356ed497b1b63281d2bdc815883d6f6b31abd09e88f43f78\"" Apr 24 23:48:25.962647 kubelet[2505]: I0424 23:48:25.962483 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9fbc96-b40f-4822-b1a2-7020989db95d-whisker-ca-bundle\") pod \"whisker-8bf6f76b-6k2b6\" (UID: \"ab9fbc96-b40f-4822-b1a2-7020989db95d\") " pod="calico-system/whisker-8bf6f76b-6k2b6" Apr 24 23:48:25.962647 kubelet[2505]: I0424 23:48:25.962546 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ab9fbc96-b40f-4822-b1a2-7020989db95d-nginx-config\") pod \"whisker-8bf6f76b-6k2b6\" (UID: \"ab9fbc96-b40f-4822-b1a2-7020989db95d\") " pod="calico-system/whisker-8bf6f76b-6k2b6" Apr 24 23:48:25.962647 kubelet[2505]: I0424 23:48:25.962581 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429xk\" (UniqueName: \"kubernetes.io/projected/ab9fbc96-b40f-4822-b1a2-7020989db95d-kube-api-access-429xk\") pod \"whisker-8bf6f76b-6k2b6\" (UID: \"ab9fbc96-b40f-4822-b1a2-7020989db95d\") " pod="calico-system/whisker-8bf6f76b-6k2b6" Apr 24 23:48:25.962647 kubelet[2505]: I0424 23:48:25.962625 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ab9fbc96-b40f-4822-b1a2-7020989db95d-whisker-backend-key-pair\") pod \"whisker-8bf6f76b-6k2b6\" (UID: \"ab9fbc96-b40f-4822-b1a2-7020989db95d\") " pod="calico-system/whisker-8bf6f76b-6k2b6" Apr 24 23:48:25.963015 systemd[1]: Started cri-containerd-b3a319ead38d26ff356ed497b1b63281d2bdc815883d6f6b31abd09e88f43f78.scope - libcontainer container b3a319ead38d26ff356ed497b1b63281d2bdc815883d6f6b31abd09e88f43f78. Apr 24 23:48:25.975084 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:25.977624 systemd-networkd[1391]: cali4a6f21c604e: Link UP Apr 24 23:48:25.978906 systemd-networkd[1391]: cali4a6f21c604e: Gained carrier Apr 24 23:48:26.041572 containerd[1460]: time="2026-04-24T23:48:26.041411646Z" level=info msg="StartContainer for \"b3a319ead38d26ff356ed497b1b63281d2bdc815883d6f6b31abd09e88f43f78\" returns successfully" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.386 [ERROR][4115] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.416 [INFO][4115] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zpms9-eth0 csi-node-driver- calico-system 8b177f61-af99-4e0d-af51-f699d327434d 943 0 2026-04-24 23:48:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zpms9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4a6f21c604e [] [] }} 
ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.416 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.518 [INFO][4173] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" HandleID="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Workload="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.527 [INFO][4173] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" HandleID="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Workload="localhost-k8s-csi--node--driver--zpms9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000489170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zpms9", "timestamp":"2026-04-24 23:48:25.518570922 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00076c000)} Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.527 [INFO][4173] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.608 [INFO][4173] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.609 [INFO][4173] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.658 [INFO][4173] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.701 [INFO][4173] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.749 [INFO][4173] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.754 [INFO][4173] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.807 [INFO][4173] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.833 [INFO][4173] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.859 [INFO][4173] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645 Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.887 [INFO][4173] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.896 [INFO][4173] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.897 [INFO][4173] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" host="localhost" Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.901 [INFO][4173] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:26.044224 containerd[1460]: 2026-04-24 23:48:25.964 [INFO][4173] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" HandleID="k8s-pod-network.9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Workload="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.044723 containerd[1460]: 2026-04-24 23:48:25.972 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zpms9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b177f61-af99-4e0d-af51-f699d327434d", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zpms9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6f21c604e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:26.044723 containerd[1460]: 2026-04-24 23:48:25.973 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.044723 containerd[1460]: 2026-04-24 23:48:25.973 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a6f21c604e ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.044723 containerd[1460]: 2026-04-24 23:48:26.015 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.044723 containerd[1460]: 2026-04-24 23:48:26.016 [INFO][4115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zpms9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b177f61-af99-4e0d-af51-f699d327434d", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645", Pod:"csi-node-driver-zpms9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6f21c604e", MAC:"ae:7c:73:c4:8f:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:26.044723 containerd[1460]: 2026-04-24 23:48:26.040 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645" 
Namespace="calico-system" Pod="csi-node-driver-zpms9" WorkloadEndpoint="localhost-k8s-csi--node--driver--zpms9-eth0" Apr 24 23:48:26.161656 containerd[1460]: time="2026-04-24T23:48:26.160892218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bf6f76b-6k2b6,Uid:ab9fbc96-b40f-4822-b1a2-7020989db95d,Namespace:calico-system,Attempt:0,}" Apr 24 23:48:26.168199 containerd[1460]: time="2026-04-24T23:48:26.166095491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:26.168199 containerd[1460]: time="2026-04-24T23:48:26.166169746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:26.168199 containerd[1460]: time="2026-04-24T23:48:26.166181439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:26.168199 containerd[1460]: time="2026-04-24T23:48:26.166325456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:26.220013 containerd[1460]: time="2026-04-24T23:48:26.219109973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-h2mjz,Uid:ab27142a-e4c1-4ad9-85f0-56df27f44b76,Namespace:calico-system,Attempt:1,} returns sandbox id \"90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18\"" Apr 24 23:48:26.223382 containerd[1460]: time="2026-04-24T23:48:26.223340151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:48:26.257994 systemd[1]: Started cri-containerd-9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645.scope - libcontainer container 9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645. 
Apr 24 23:48:26.269155 systemd-networkd[1391]: cali8552979bafc: Link UP Apr 24 23:48:26.270918 systemd-networkd[1391]: cali8552979bafc: Gained carrier Apr 24 23:48:26.290032 kubelet[2505]: I0424 23:48:26.287737 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5035c7-812a-4219-b111-db043ba6addb" path="/var/lib/kubelet/pods/dc5035c7-812a-4219-b111-db043ba6addb/volumes" Apr 24 23:48:26.348686 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.376 [ERROR][4068] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.410 [INFO][4068] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0 calico-apiserver-647b959c57- calico-system c44becb5-fcd6-4bd1-91f5-38af42d74697 939 0 2026-04-24 23:48:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647b959c57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647b959c57-7mztb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8552979bafc [] [] }} ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-" Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.415 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.509 [INFO][4168] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" HandleID="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.529 [INFO][4168] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" HandleID="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c7360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-647b959c57-7mztb", "timestamp":"2026-04-24 23:48:25.509624303 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000326000)} Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.529 [INFO][4168] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.900 [INFO][4168] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:25.920 [INFO][4168] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.001 [INFO][4168] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.039 [INFO][4168] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.069 [INFO][4168] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.086 [INFO][4168] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.110 [INFO][4168] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.116 [INFO][4168] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.172 [INFO][4168] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.218 [INFO][4168] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.249 [INFO][4168] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.250 [INFO][4168] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" host="localhost"
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.250 [INFO][4168] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:26.349703 containerd[1460]: 2026-04-24 23:48:26.250 [INFO][4168] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" HandleID="k8s-pod-network.39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0"
Apr 24 23:48:26.350300 containerd[1460]: 2026-04-24 23:48:26.255 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"c44becb5-fcd6-4bd1-91f5-38af42d74697", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647b959c57-7mztb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8552979bafc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:26.350300 containerd[1460]: 2026-04-24 23:48:26.259 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0"
Apr 24 23:48:26.350300 containerd[1460]: 2026-04-24 23:48:26.260 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8552979bafc ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0"
Apr 24 23:48:26.350300 containerd[1460]: 2026-04-24 23:48:26.287 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0"
Apr 24 23:48:26.350300 containerd[1460]: 2026-04-24 23:48:26.288 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"c44becb5-fcd6-4bd1-91f5-38af42d74697", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2", Pod:"calico-apiserver-647b959c57-7mztb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8552979bafc", MAC:"e6:e2:78:aa:d6:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:26.350300 containerd[1460]: 2026-04-24 23:48:26.327 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2" Namespace="calico-system" Pod="calico-apiserver-647b959c57-7mztb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0"
Apr 24 23:48:26.542571 containerd[1460]: time="2026-04-24T23:48:26.542151303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpms9,Uid:8b177f61-af99-4e0d-af51-f699d327434d,Namespace:calico-system,Attempt:1,} returns sandbox id \"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645\""
Apr 24 23:48:26.573937 containerd[1460]: time="2026-04-24T23:48:26.573079321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:48:26.573937 containerd[1460]: time="2026-04-24T23:48:26.573220182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:48:26.573937 containerd[1460]: time="2026-04-24T23:48:26.573230690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:48:26.573937 containerd[1460]: time="2026-04-24T23:48:26.573322277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:48:26.607477 systemd-networkd[1391]: cali96875bcbaa5: Link UP
Apr 24 23:48:26.610065 systemd-networkd[1391]: cali96875bcbaa5: Gained carrier
Apr 24 23:48:26.634974 systemd[1]: Started cri-containerd-39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2.scope - libcontainer container 39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2.
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:25.397 [ERROR][4049] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:25.421 [INFO][4049] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--8djts-eth0 goldmane-5b85766d88- calico-system 76788716-614e-4c65-978c-a90311cc57b5 940 0 2026-04-24 23:48:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-8djts eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali96875bcbaa5 [] [] }} ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:25.421 [INFO][4049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:25.517 [INFO][4163] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" HandleID="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:25.529 [INFO][4163] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" HandleID="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-8djts", "timestamp":"2026-04-24 23:48:25.517951653 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f7080)}
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:25.529 [INFO][4163] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.251 [INFO][4163] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.251 [INFO][4163] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.260 [INFO][4163] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.328 [INFO][4163] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.351 [INFO][4163] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.361 [INFO][4163] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.385 [INFO][4163] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.475 [INFO][4163] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.497 [INFO][4163] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.513 [INFO][4163] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.555 [INFO][4163] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.557 [INFO][4163] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" host="localhost"
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.557 [INFO][4163] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:26.638989 containerd[1460]: 2026-04-24 23:48:26.558 [INFO][4163] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" HandleID="k8s-pod-network.d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.639854 containerd[1460]: 2026-04-24 23:48:26.564 [INFO][4049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--8djts-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76788716-614e-4c65-978c-a90311cc57b5", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-8djts", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96875bcbaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:26.639854 containerd[1460]: 2026-04-24 23:48:26.567 [INFO][4049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.639854 containerd[1460]: 2026-04-24 23:48:26.567 [INFO][4049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96875bcbaa5 ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.639854 containerd[1460]: 2026-04-24 23:48:26.612 [INFO][4049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.639854 containerd[1460]: 2026-04-24 23:48:26.614 [INFO][4049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--8djts-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76788716-614e-4c65-978c-a90311cc57b5", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638", Pod:"goldmane-5b85766d88-8djts", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96875bcbaa5", MAC:"16:f2:e5:4e:76:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:26.639854 containerd[1460]: 2026-04-24 23:48:26.636 [INFO][4049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638" Namespace="calico-system" Pod="goldmane-5b85766d88-8djts" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--8djts-eth0"
Apr 24 23:48:26.679466 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 24 23:48:26.700952 systemd-networkd[1391]: cali5dd4fb62c59: Link UP
Apr 24 23:48:26.721345 containerd[1460]: time="2026-04-24T23:48:26.717613220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:48:26.721345 containerd[1460]: time="2026-04-24T23:48:26.717880672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:48:26.721345 containerd[1460]: time="2026-04-24T23:48:26.717896377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:48:26.721345 containerd[1460]: time="2026-04-24T23:48:26.718007038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:48:26.722830 systemd-networkd[1391]: cali5dd4fb62c59: Gained carrier
Apr 24 23:48:26.732262 kubelet[2505]: E0424 23:48:26.732216 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:25.437 [ERROR][4129] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:25.461 [INFO][4129] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0 coredns-674b8bbfcf- kube-system 730a30d0-cc29-4d69-b761-41db17443064 947 0 2026-04-24 23:47:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6fp8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5dd4fb62c59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:25.461 [INFO][4129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:25.548 [INFO][4189] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" HandleID="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:25.555 [INFO][4189] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" HandleID="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd9d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6fp8r", "timestamp":"2026-04-24 23:48:25.548185592 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040b1e0)}
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:25.555 [INFO][4189] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.563 [INFO][4189] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.565 [INFO][4189] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.585 [INFO][4189] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.615 [INFO][4189] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.624 [INFO][4189] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.631 [INFO][4189] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.638 [INFO][4189] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.638 [INFO][4189] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.642 [INFO][4189] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.651 [INFO][4189] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.669 [INFO][4189] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.669 [INFO][4189] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" host="localhost"
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.669 [INFO][4189] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:26.760089 containerd[1460]: 2026-04-24 23:48:26.669 [INFO][4189] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" HandleID="k8s-pod-network.32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.765157 containerd[1460]: 2026-04-24 23:48:26.681 [INFO][4129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"730a30d0-cc29-4d69-b761-41db17443064", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6fp8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dd4fb62c59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:26.765157 containerd[1460]: 2026-04-24 23:48:26.683 [INFO][4129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.765157 containerd[1460]: 2026-04-24 23:48:26.684 [INFO][4129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dd4fb62c59 ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.765157 containerd[1460]: 2026-04-24 23:48:26.722 [INFO][4129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.765157 containerd[1460]: 2026-04-24 23:48:26.724 [INFO][4129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"730a30d0-cc29-4d69-b761-41db17443064", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579", Pod:"coredns-674b8bbfcf-6fp8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dd4fb62c59", MAC:"82:46:55:7f:d2:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:26.765157 containerd[1460]: 2026-04-24 23:48:26.754 [INFO][4129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579" Namespace="kube-system" Pod="coredns-674b8bbfcf-6fp8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0"
Apr 24 23:48:26.772170 kubelet[2505]: I0424 23:48:26.772058 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bfrts" podStartSLOduration=34.772004279 podStartE2EDuration="34.772004279s" podCreationTimestamp="2026-04-24 23:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:48:26.765815569 +0000 UTC m=+40.606742655" watchObservedRunningTime="2026-04-24 23:48:26.772004279 +0000 UTC m=+40.612931442"
Apr 24 23:48:26.797975 systemd[1]: Started cri-containerd-d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638.scope - libcontainer container d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638.
Apr 24 23:48:26.802993 systemd-networkd[1391]: cali9dd4368cc57: Gained IPv6LL Apr 24 23:48:26.836894 containerd[1460]: time="2026-04-24T23:48:26.834374353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647b959c57-7mztb,Uid:c44becb5-fcd6-4bd1-91f5-38af42d74697,Namespace:calico-system,Attempt:1,} returns sandbox id \"39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2\"" Apr 24 23:48:26.847160 containerd[1460]: time="2026-04-24T23:48:26.847059681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:26.847321 containerd[1460]: time="2026-04-24T23:48:26.847141238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:26.847565 containerd[1460]: time="2026-04-24T23:48:26.847359926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:26.847565 containerd[1460]: time="2026-04-24T23:48:26.847469471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:26.863074 systemd-networkd[1391]: cali349da596802: Link UP Apr 24 23:48:26.863214 systemd-networkd[1391]: cali349da596802: Gained carrier Apr 24 23:48:26.866895 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:26.878943 systemd[1]: Started cri-containerd-32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579.scope - libcontainer container 32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579. 
Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:25.420 [ERROR][4097] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:25.465 [INFO][4097] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0 calico-kube-controllers-58c4885b55- calico-system c75b49fd-5383-4d1b-9dab-badcf7790241 938 0 2026-04-24 23:48:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58c4885b55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58c4885b55-2kpm5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali349da596802 [] [] }} ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:25.465 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:25.540 [INFO][4192] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" HandleID="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" 
Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:25.558 [INFO][4192] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" HandleID="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58c4885b55-2kpm5", "timestamp":"2026-04-24 23:48:25.540453652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00045a2c0)} Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:25.558 [INFO][4192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.669 [INFO][4192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.670 [INFO][4192] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.673 [INFO][4192] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.717 [INFO][4192] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.736 [INFO][4192] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.750 [INFO][4192] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.798 [INFO][4192] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.815 [INFO][4192] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.828 [INFO][4192] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.842 [INFO][4192] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.849 [INFO][4192] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.849 [INFO][4192] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" host="localhost" Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.849 [INFO][4192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:26.918034 containerd[1460]: 2026-04-24 23:48:26.849 [INFO][4192] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" HandleID="k8s-pod-network.6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.919508 containerd[1460]: 2026-04-24 23:48:26.857 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0", GenerateName:"calico-kube-controllers-58c4885b55-", Namespace:"calico-system", SelfLink:"", UID:"c75b49fd-5383-4d1b-9dab-badcf7790241", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c4885b55", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58c4885b55-2kpm5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349da596802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:26.919508 containerd[1460]: 2026-04-24 23:48:26.858 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.919508 containerd[1460]: 2026-04-24 23:48:26.858 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali349da596802 ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.919508 containerd[1460]: 2026-04-24 23:48:26.868 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.919508 containerd[1460]: 
2026-04-24 23:48:26.877 [INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0", GenerateName:"calico-kube-controllers-58c4885b55-", Namespace:"calico-system", SelfLink:"", UID:"c75b49fd-5383-4d1b-9dab-badcf7790241", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c4885b55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e", Pod:"calico-kube-controllers-58c4885b55-2kpm5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349da596802", MAC:"6a:83:2f:de:15:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:26.919508 containerd[1460]: 
2026-04-24 23:48:26.909 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e" Namespace="calico-system" Pod="calico-kube-controllers-58c4885b55-2kpm5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:26.948448 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:26.997615 containerd[1460]: time="2026-04-24T23:48:26.997348836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8djts,Uid:76788716-614e-4c65-978c-a90311cc57b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638\"" Apr 24 23:48:27.041016 systemd-networkd[1391]: cali8c5e137767a: Link UP Apr 24 23:48:27.046814 systemd-networkd[1391]: cali8c5e137767a: Gained carrier Apr 24 23:48:27.049087 containerd[1460]: time="2026-04-24T23:48:27.048558735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:27.049087 containerd[1460]: time="2026-04-24T23:48:27.048638452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:27.049087 containerd[1460]: time="2026-04-24T23:48:27.048651373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:27.049087 containerd[1460]: time="2026-04-24T23:48:27.048755555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:27.057971 systemd-networkd[1391]: cali4a6f21c604e: Gained IPv6LL Apr 24 23:48:27.093537 containerd[1460]: time="2026-04-24T23:48:27.092757087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6fp8r,Uid:730a30d0-cc29-4d69-b761-41db17443064,Namespace:kube-system,Attempt:1,} returns sandbox id \"32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579\"" Apr 24 23:48:27.100514 kubelet[2505]: E0424 23:48:27.100139 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.351 [ERROR][4512] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.511 [INFO][4512] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8bf6f76b--6k2b6-eth0 whisker-8bf6f76b- calico-system ab9fbc96-b40f-4822-b1a2-7020989db95d 972 0 2026-04-24 23:48:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8bf6f76b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8bf6f76b-6k2b6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8c5e137767a [] [] }} ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.511 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.662 [INFO][4562] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" HandleID="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Workload="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.674 [INFO][4562] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" HandleID="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Workload="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000330480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8bf6f76b-6k2b6", "timestamp":"2026-04-24 23:48:26.662864169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ec580)} Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.676 [INFO][4562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.849 [INFO][4562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.849 [INFO][4562] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.855 [INFO][4562] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.868 [INFO][4562] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.880 [INFO][4562] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.934 [INFO][4562] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.948 [INFO][4562] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.948 [INFO][4562] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.952 [INFO][4562] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:26.968 [INFO][4562] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:27.025 [INFO][4562] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:27.025 [INFO][4562] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" host="localhost" Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:27.025 [INFO][4562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:27.116241 containerd[1460]: 2026-04-24 23:48:27.025 [INFO][4562] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" HandleID="k8s-pod-network.de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Workload="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.118265 containerd[1460]: 2026-04-24 23:48:27.031 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8bf6f76b--6k2b6-eth0", GenerateName:"whisker-8bf6f76b-", Namespace:"calico-system", SelfLink:"", UID:"ab9fbc96-b40f-4822-b1a2-7020989db95d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8bf6f76b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8bf6f76b-6k2b6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8c5e137767a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:27.118265 containerd[1460]: 2026-04-24 23:48:27.032 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.118265 containerd[1460]: 2026-04-24 23:48:27.032 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c5e137767a ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.118265 containerd[1460]: 2026-04-24 23:48:27.048 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.118265 containerd[1460]: 2026-04-24 23:48:27.049 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" 
WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8bf6f76b--6k2b6-eth0", GenerateName:"whisker-8bf6f76b-", Namespace:"calico-system", SelfLink:"", UID:"ab9fbc96-b40f-4822-b1a2-7020989db95d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8bf6f76b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e", Pod:"whisker-8bf6f76b-6k2b6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8c5e137767a", MAC:"ae:18:5f:bd:aa:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:27.118265 containerd[1460]: 2026-04-24 23:48:27.098 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e" Namespace="calico-system" Pod="whisker-8bf6f76b-6k2b6" WorkloadEndpoint="localhost-k8s-whisker--8bf6f76b--6k2b6-eth0" Apr 24 23:48:27.118930 systemd[1]: Started cri-containerd-6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e.scope - 
libcontainer container 6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e. Apr 24 23:48:27.121730 containerd[1460]: time="2026-04-24T23:48:27.121699318Z" level=info msg="CreateContainer within sandbox \"32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:48:27.138503 containerd[1460]: time="2026-04-24T23:48:27.138250367Z" level=info msg="CreateContainer within sandbox \"32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6b4b6ad1532ae94fe7a79cb58010c19de6805075f439aa84f29dc67b2f432b5\"" Apr 24 23:48:27.140307 containerd[1460]: time="2026-04-24T23:48:27.139495740Z" level=info msg="StartContainer for \"b6b4b6ad1532ae94fe7a79cb58010c19de6805075f439aa84f29dc67b2f432b5\"" Apr 24 23:48:27.144177 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:27.160063 containerd[1460]: time="2026-04-24T23:48:27.155984630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:48:27.160063 containerd[1460]: time="2026-04-24T23:48:27.156103277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:48:27.160063 containerd[1460]: time="2026-04-24T23:48:27.156127565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:27.160063 containerd[1460]: time="2026-04-24T23:48:27.156227521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:48:27.187022 systemd[1]: Started cri-containerd-de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e.scope - libcontainer container de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e. Apr 24 23:48:27.208197 systemd[1]: Started cri-containerd-b6b4b6ad1532ae94fe7a79cb58010c19de6805075f439aa84f29dc67b2f432b5.scope - libcontainer container b6b4b6ad1532ae94fe7a79cb58010c19de6805075f439aa84f29dc67b2f432b5. Apr 24 23:48:27.232088 containerd[1460]: time="2026-04-24T23:48:27.231765145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c4885b55-2kpm5,Uid:c75b49fd-5383-4d1b-9dab-badcf7790241,Namespace:calico-system,Attempt:1,} returns sandbox id \"6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e\"" Apr 24 23:48:27.239429 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:48:27.250606 systemd-networkd[1391]: cali8f135889d78: Gained IPv6LL Apr 24 23:48:27.259068 containerd[1460]: time="2026-04-24T23:48:27.258950781Z" level=info msg="StartContainer for \"b6b4b6ad1532ae94fe7a79cb58010c19de6805075f439aa84f29dc67b2f432b5\" returns successfully" Apr 24 23:48:27.281740 containerd[1460]: time="2026-04-24T23:48:27.281658330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bf6f76b-6k2b6,Uid:ab9fbc96-b40f-4822-b1a2-7020989db95d,Namespace:calico-system,Attempt:0,} returns sandbox id \"de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e\"" Apr 24 23:48:27.633250 systemd-networkd[1391]: cali8552979bafc: Gained IPv6LL Apr 24 23:48:27.745331 kubelet[2505]: E0424 23:48:27.745102 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:27.751475 kubelet[2505]: E0424 23:48:27.751428 2505 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:27.766042 kubelet[2505]: I0424 23:48:27.765941 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6fp8r" podStartSLOduration=35.765901507 podStartE2EDuration="35.765901507s" podCreationTimestamp="2026-04-24 23:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:48:27.764549466 +0000 UTC m=+41.605476549" watchObservedRunningTime="2026-04-24 23:48:27.765901507 +0000 UTC m=+41.606828587" Apr 24 23:48:27.825117 systemd-networkd[1391]: cali5dd4fb62c59: Gained IPv6LL Apr 24 23:48:28.209875 systemd-networkd[1391]: cali8c5e137767a: Gained IPv6LL Apr 24 23:48:28.274230 systemd-networkd[1391]: cali349da596802: Gained IPv6LL Apr 24 23:48:28.275283 systemd-networkd[1391]: cali96875bcbaa5: Gained IPv6LL Apr 24 23:48:28.755590 kubelet[2505]: E0424 23:48:28.755485 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:28.756402 kubelet[2505]: E0424 23:48:28.755630 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:29.772563 kubelet[2505]: E0424 23:48:29.772347 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:30.067234 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:46972.service - OpenSSH per-connection server daemon (10.0.0.1:46972). 
Apr 24 23:48:30.116426 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 46972 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:30.118281 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:30.126425 systemd-logind[1438]: New session 9 of user core. Apr 24 23:48:30.133986 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:48:30.161968 containerd[1460]: time="2026-04-24T23:48:30.160918967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:30.161968 containerd[1460]: time="2026-04-24T23:48:30.161214886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:48:30.166118 containerd[1460]: time="2026-04-24T23:48:30.165823632Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:30.174415 containerd[1460]: time="2026-04-24T23:48:30.174191124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:30.175310 containerd[1460]: time="2026-04-24T23:48:30.175278937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.951755617s" Apr 24 23:48:30.175492 containerd[1460]: time="2026-04-24T23:48:30.175323152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image 
reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:48:30.178721 containerd[1460]: time="2026-04-24T23:48:30.177399236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:48:30.187088 containerd[1460]: time="2026-04-24T23:48:30.186854700Z" level=info msg="CreateContainer within sandbox \"90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:48:30.208196 containerd[1460]: time="2026-04-24T23:48:30.207621415Z" level=info msg="CreateContainer within sandbox \"90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ed352a0e7c3e0d3433888cb4f967b0b0ad504edfd88e1a45c54068b505ba8cd\"" Apr 24 23:48:30.222073 containerd[1460]: time="2026-04-24T23:48:30.221661727Z" level=info msg="StartContainer for \"0ed352a0e7c3e0d3433888cb4f967b0b0ad504edfd88e1a45c54068b505ba8cd\"" Apr 24 23:48:30.291072 systemd[1]: Started cri-containerd-0ed352a0e7c3e0d3433888cb4f967b0b0ad504edfd88e1a45c54068b505ba8cd.scope - libcontainer container 0ed352a0e7c3e0d3433888cb4f967b0b0ad504edfd88e1a45c54068b505ba8cd. Apr 24 23:48:30.329033 kubelet[2505]: I0424 23:48:30.327665 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:48:30.329033 kubelet[2505]: E0424 23:48:30.328372 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:30.341040 sshd[4925]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:30.349511 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:46972.service: Deactivated successfully. Apr 24 23:48:30.351855 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:48:30.355223 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. 
Apr 24 23:48:30.357395 systemd-logind[1438]: Removed session 9. Apr 24 23:48:30.379607 containerd[1460]: time="2026-04-24T23:48:30.379385574Z" level=info msg="StartContainer for \"0ed352a0e7c3e0d3433888cb4f967b0b0ad504edfd88e1a45c54068b505ba8cd\" returns successfully" Apr 24 23:48:30.784646 kubelet[2505]: E0424 23:48:30.784391 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:48:30.844728 kubelet[2505]: I0424 23:48:30.839833 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-647b959c57-h2mjz" podStartSLOduration=24.885042416 podStartE2EDuration="28.839768651s" podCreationTimestamp="2026-04-24 23:48:02 +0000 UTC" firstStartedPulling="2026-04-24 23:48:26.222342684 +0000 UTC m=+40.063269764" lastFinishedPulling="2026-04-24 23:48:30.17706892 +0000 UTC m=+44.017995999" observedRunningTime="2026-04-24 23:48:30.838284422 +0000 UTC m=+44.679211502" watchObservedRunningTime="2026-04-24 23:48:30.839768651 +0000 UTC m=+44.680695752" Apr 24 23:48:31.260850 kernel: calico-node[5036]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:48:31.724133 systemd-networkd[1391]: vxlan.calico: Link UP Apr 24 23:48:31.724874 systemd-networkd[1391]: vxlan.calico: Gained carrier Apr 24 23:48:31.790373 kubelet[2505]: I0424 23:48:31.790294 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:48:32.534751 containerd[1460]: time="2026-04-24T23:48:32.534665056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:32.536250 containerd[1460]: time="2026-04-24T23:48:32.535130780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:48:32.536250 containerd[1460]: 
time="2026-04-24T23:48:32.536024961Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:32.539984 containerd[1460]: time="2026-04-24T23:48:32.539942763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.36251507s" Apr 24 23:48:32.539984 containerd[1460]: time="2026-04-24T23:48:32.539982753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:48:32.540434 containerd[1460]: time="2026-04-24T23:48:32.540394467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:32.543907 containerd[1460]: time="2026-04-24T23:48:32.543875060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:48:32.547070 containerd[1460]: time="2026-04-24T23:48:32.547015912Z" level=info msg="CreateContainer within sandbox \"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:48:32.565421 containerd[1460]: time="2026-04-24T23:48:32.565227578Z" level=info msg="CreateContainer within sandbox \"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b0552cf761ce03d9ba3ff415a2e173436b942f30131536baf4a1d909619c9f6\"" Apr 24 23:48:32.569220 containerd[1460]: time="2026-04-24T23:48:32.567266832Z" 
level=info msg="StartContainer for \"9b0552cf761ce03d9ba3ff415a2e173436b942f30131536baf4a1d909619c9f6\"" Apr 24 23:48:32.609101 systemd[1]: Started cri-containerd-9b0552cf761ce03d9ba3ff415a2e173436b942f30131536baf4a1d909619c9f6.scope - libcontainer container 9b0552cf761ce03d9ba3ff415a2e173436b942f30131536baf4a1d909619c9f6. Apr 24 23:48:32.640313 containerd[1460]: time="2026-04-24T23:48:32.640276706Z" level=info msg="StartContainer for \"9b0552cf761ce03d9ba3ff415a2e173436b942f30131536baf4a1d909619c9f6\" returns successfully" Apr 24 23:48:32.993111 containerd[1460]: time="2026-04-24T23:48:32.992869446Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:32.994061 containerd[1460]: time="2026-04-24T23:48:32.993330457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:48:32.996648 containerd[1460]: time="2026-04-24T23:48:32.996604326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 452.673864ms" Apr 24 23:48:32.996648 containerd[1460]: time="2026-04-24T23:48:32.996647270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:48:33.000890 containerd[1460]: time="2026-04-24T23:48:33.000555391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:48:33.008078 containerd[1460]: time="2026-04-24T23:48:33.008013498Z" level=info msg="CreateContainer within sandbox \"39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2\" for 
container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:48:33.023912 containerd[1460]: time="2026-04-24T23:48:33.023839604Z" level=info msg="CreateContainer within sandbox \"39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22f7223d017b579fd3768e06a48b499c9ede14e2ab06160a7ceca4b8d1ffb640\"" Apr 24 23:48:33.025084 containerd[1460]: time="2026-04-24T23:48:33.025033709Z" level=info msg="StartContainer for \"22f7223d017b579fd3768e06a48b499c9ede14e2ab06160a7ceca4b8d1ffb640\"" Apr 24 23:48:33.057968 systemd[1]: Started cri-containerd-22f7223d017b579fd3768e06a48b499c9ede14e2ab06160a7ceca4b8d1ffb640.scope - libcontainer container 22f7223d017b579fd3768e06a48b499c9ede14e2ab06160a7ceca4b8d1ffb640. Apr 24 23:48:33.162607 containerd[1460]: time="2026-04-24T23:48:33.162287877Z" level=info msg="StartContainer for \"22f7223d017b579fd3768e06a48b499c9ede14e2ab06160a7ceca4b8d1ffb640\" returns successfully" Apr 24 23:48:33.394291 systemd-networkd[1391]: vxlan.calico: Gained IPv6LL Apr 24 23:48:33.833990 kubelet[2505]: I0424 23:48:33.833872 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-647b959c57-7mztb" podStartSLOduration=25.670197623 podStartE2EDuration="31.833852408s" podCreationTimestamp="2026-04-24 23:48:02 +0000 UTC" firstStartedPulling="2026-04-24 23:48:26.83658039 +0000 UTC m=+40.677507470" lastFinishedPulling="2026-04-24 23:48:33.000235149 +0000 UTC m=+46.841162255" observedRunningTime="2026-04-24 23:48:33.825977776 +0000 UTC m=+47.666904856" watchObservedRunningTime="2026-04-24 23:48:33.833852408 +0000 UTC m=+47.674779543" Apr 24 23:48:34.822750 kubelet[2505]: I0424 23:48:34.822598 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:48:35.355201 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:46976.service - OpenSSH per-connection server daemon (10.0.0.1:46976). 
Apr 24 23:48:35.407331 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 46976 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:35.409075 sshd[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:35.415145 systemd-logind[1438]: New session 10 of user core. Apr 24 23:48:35.419947 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:48:35.488565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186594349.mount: Deactivated successfully. Apr 24 23:48:35.589849 sshd[5241]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:35.593986 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:46976.service: Deactivated successfully. Apr 24 23:48:35.596510 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:48:35.597421 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:48:35.600399 systemd-logind[1438]: Removed session 10. Apr 24 23:48:35.818837 containerd[1460]: time="2026-04-24T23:48:35.818495971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:35.821216 containerd[1460]: time="2026-04-24T23:48:35.819361049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:48:35.821216 containerd[1460]: time="2026-04-24T23:48:35.820379810Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:35.823149 containerd[1460]: time="2026-04-24T23:48:35.823114011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:35.823849 containerd[1460]: 
time="2026-04-24T23:48:35.823817544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.823077157s" Apr 24 23:48:35.823963 containerd[1460]: time="2026-04-24T23:48:35.823855303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:48:35.825592 containerd[1460]: time="2026-04-24T23:48:35.825560869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:48:35.829345 containerd[1460]: time="2026-04-24T23:48:35.829302397Z" level=info msg="CreateContainer within sandbox \"d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:48:35.853667 containerd[1460]: time="2026-04-24T23:48:35.853323202Z" level=info msg="CreateContainer within sandbox \"d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3ebbc8ba09855fd1940b4abcf7cf99c8f710380ab6bac9e04bc699cea4e658b0\"" Apr 24 23:48:35.856613 containerd[1460]: time="2026-04-24T23:48:35.856394175Z" level=info msg="StartContainer for \"3ebbc8ba09855fd1940b4abcf7cf99c8f710380ab6bac9e04bc699cea4e658b0\"" Apr 24 23:48:35.983971 systemd[1]: Started cri-containerd-3ebbc8ba09855fd1940b4abcf7cf99c8f710380ab6bac9e04bc699cea4e658b0.scope - libcontainer container 3ebbc8ba09855fd1940b4abcf7cf99c8f710380ab6bac9e04bc699cea4e658b0. 
Apr 24 23:48:36.076486 containerd[1460]: time="2026-04-24T23:48:36.074436281Z" level=info msg="StartContainer for \"3ebbc8ba09855fd1940b4abcf7cf99c8f710380ab6bac9e04bc699cea4e658b0\" returns successfully" Apr 24 23:48:36.866372 systemd[1]: run-containerd-runc-k8s.io-3ebbc8ba09855fd1940b4abcf7cf99c8f710380ab6bac9e04bc699cea4e658b0-runc.knA3vh.mount: Deactivated successfully. Apr 24 23:48:40.618517 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:35648.service - OpenSSH per-connection server daemon (10.0.0.1:35648). Apr 24 23:48:40.694883 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 35648 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:40.696618 sshd[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:40.701999 systemd-logind[1438]: New session 11 of user core. Apr 24 23:48:40.709980 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:48:40.934170 sshd[5367]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:40.938528 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:35648.service: Deactivated successfully. Apr 24 23:48:40.941672 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:48:40.943998 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:48:40.945036 systemd-logind[1438]: Removed session 11. 
Apr 24 23:48:42.449933 containerd[1460]: time="2026-04-24T23:48:42.449536919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:42.449933 containerd[1460]: time="2026-04-24T23:48:42.449893528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:48:42.451639 containerd[1460]: time="2026-04-24T23:48:42.451092873Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:42.453405 containerd[1460]: time="2026-04-24T23:48:42.453369016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:42.454117 containerd[1460]: time="2026-04-24T23:48:42.454083994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 6.62848805s" Apr 24 23:48:42.454117 containerd[1460]: time="2026-04-24T23:48:42.454119430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 23:48:42.456823 containerd[1460]: time="2026-04-24T23:48:42.456630621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:48:42.470568 containerd[1460]: time="2026-04-24T23:48:42.470524174Z" level=info msg="CreateContainer within sandbox 
\"6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:48:42.483948 containerd[1460]: time="2026-04-24T23:48:42.483905067Z" level=info msg="CreateContainer within sandbox \"6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"db117f99a0eb0f42569830e11420b0695609d4535131439df109179ca7d629d5\"" Apr 24 23:48:42.484375 containerd[1460]: time="2026-04-24T23:48:42.484348371Z" level=info msg="StartContainer for \"db117f99a0eb0f42569830e11420b0695609d4535131439df109179ca7d629d5\"" Apr 24 23:48:42.581951 systemd[1]: Started cri-containerd-db117f99a0eb0f42569830e11420b0695609d4535131439df109179ca7d629d5.scope - libcontainer container db117f99a0eb0f42569830e11420b0695609d4535131439df109179ca7d629d5. Apr 24 23:48:42.618974 containerd[1460]: time="2026-04-24T23:48:42.618811275Z" level=info msg="StartContainer for \"db117f99a0eb0f42569830e11420b0695609d4535131439df109179ca7d629d5\" returns successfully" Apr 24 23:48:42.972424 kubelet[2505]: I0424 23:48:42.971697 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-8djts" podStartSLOduration=32.211810955 podStartE2EDuration="40.971608463s" podCreationTimestamp="2026-04-24 23:48:02 +0000 UTC" firstStartedPulling="2026-04-24 23:48:27.065284374 +0000 UTC m=+40.906211455" lastFinishedPulling="2026-04-24 23:48:35.825081881 +0000 UTC m=+49.666008963" observedRunningTime="2026-04-24 23:48:36.852584735 +0000 UTC m=+50.693511832" watchObservedRunningTime="2026-04-24 23:48:42.971608463 +0000 UTC m=+56.812535562" Apr 24 23:48:42.972424 kubelet[2505]: I0424 23:48:42.972102 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58c4885b55-2kpm5" podStartSLOduration=24.751416483 podStartE2EDuration="39.972094829s" 
podCreationTimestamp="2026-04-24 23:48:03 +0000 UTC" firstStartedPulling="2026-04-24 23:48:27.235055757 +0000 UTC m=+41.075982837" lastFinishedPulling="2026-04-24 23:48:42.455734101 +0000 UTC m=+56.296661183" observedRunningTime="2026-04-24 23:48:42.966508841 +0000 UTC m=+56.807435924" watchObservedRunningTime="2026-04-24 23:48:42.972094829 +0000 UTC m=+56.813021910" Apr 24 23:48:44.296607 containerd[1460]: time="2026-04-24T23:48:44.296418235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:44.298350 containerd[1460]: time="2026-04-24T23:48:44.296949710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:48:44.298350 containerd[1460]: time="2026-04-24T23:48:44.297850379Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:44.299992 containerd[1460]: time="2026-04-24T23:48:44.299949031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:44.300522 containerd[1460]: time="2026-04-24T23:48:44.300498883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.843831656s" Apr 24 23:48:44.300565 containerd[1460]: time="2026-04-24T23:48:44.300553383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:48:44.302589 containerd[1460]: time="2026-04-24T23:48:44.302566863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:48:44.311228 containerd[1460]: time="2026-04-24T23:48:44.310874128Z" level=info msg="CreateContainer within sandbox \"de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:48:44.326940 containerd[1460]: time="2026-04-24T23:48:44.326849292Z" level=info msg="CreateContainer within sandbox \"de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"29fd5f13fcf266778de2d8087c90d347eae3b9b54b873ffc1fc6f8fc54262791\"" Apr 24 23:48:44.328492 containerd[1460]: time="2026-04-24T23:48:44.328464930Z" level=info msg="StartContainer for \"29fd5f13fcf266778de2d8087c90d347eae3b9b54b873ffc1fc6f8fc54262791\"" Apr 24 23:48:44.367977 systemd[1]: Started cri-containerd-29fd5f13fcf266778de2d8087c90d347eae3b9b54b873ffc1fc6f8fc54262791.scope - libcontainer container 29fd5f13fcf266778de2d8087c90d347eae3b9b54b873ffc1fc6f8fc54262791. Apr 24 23:48:44.526959 containerd[1460]: time="2026-04-24T23:48:44.526795472Z" level=info msg="StartContainer for \"29fd5f13fcf266778de2d8087c90d347eae3b9b54b873ffc1fc6f8fc54262791\" returns successfully" Apr 24 23:48:45.960452 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:35656.service - OpenSSH per-connection server daemon (10.0.0.1:35656). Apr 24 23:48:46.012795 sshd[5534]: Accepted publickey for core from 10.0.0.1 port 35656 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:46.015246 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:46.021258 systemd-logind[1438]: New session 12 of user core. 
Apr 24 23:48:46.034923 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:48:46.206996 sshd[5534]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:46.215307 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:35656.service: Deactivated successfully. Apr 24 23:48:46.216914 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:48:46.218283 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:48:46.226079 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:58084.service - OpenSSH per-connection server daemon (10.0.0.1:58084). Apr 24 23:48:46.227553 systemd-logind[1438]: Removed session 12. Apr 24 23:48:46.262073 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 58084 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:46.261581 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:46.270429 systemd-logind[1438]: New session 13 of user core. Apr 24 23:48:46.270762 containerd[1460]: time="2026-04-24T23:48:46.270732852Z" level=info msg="StopPodSandbox for \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\"" Apr 24 23:48:46.276900 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.400 [WARNING][5564] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" WorkloadEndpoint="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.401 [INFO][5564] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.401 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" iface="eth0" netns="" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.402 [INFO][5564] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.402 [INFO][5564] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.486 [INFO][5579] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.487 [INFO][5579] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.488 [INFO][5579] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.513 [WARNING][5579] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.513 [INFO][5579] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.517 [INFO][5579] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:46.528051 containerd[1460]: 2026-04-24 23:48:46.520 [INFO][5564] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.534320 sshd[5550]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:46.543481 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:58084.service: Deactivated successfully. Apr 24 23:48:46.544160 containerd[1460]: time="2026-04-24T23:48:46.544043029Z" level=info msg="TearDown network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\" successfully" Apr 24 23:48:46.544469 containerd[1460]: time="2026-04-24T23:48:46.544190119Z" level=info msg="StopPodSandbox for \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\" returns successfully" Apr 24 23:48:46.546260 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:48:46.558029 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Apr 24 23:48:46.572203 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:58086.service - OpenSSH per-connection server daemon (10.0.0.1:58086). Apr 24 23:48:46.573509 systemd-logind[1438]: Removed session 13. 
Apr 24 23:48:46.607246 containerd[1460]: time="2026-04-24T23:48:46.605637926Z" level=info msg="RemovePodSandbox for \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\"" Apr 24 23:48:46.610168 containerd[1460]: time="2026-04-24T23:48:46.609810953Z" level=info msg="Forcibly stopping sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\"" Apr 24 23:48:46.633157 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 58086 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:48:46.634554 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:48:46.639874 systemd-logind[1438]: New session 14 of user core. Apr 24 23:48:46.646943 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.652 [WARNING][5604] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" WorkloadEndpoint="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.652 [INFO][5604] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.652 [INFO][5604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" iface="eth0" netns="" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.652 [INFO][5604] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.652 [INFO][5604] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.676 [INFO][5613] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.677 [INFO][5613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.677 [INFO][5613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.683 [WARNING][5613] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.683 [INFO][5613] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" HandleID="k8s-pod-network.30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Workload="localhost-k8s-whisker--696958b597--pjb8q-eth0" Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.684 [INFO][5613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:46.687820 containerd[1460]: 2026-04-24 23:48:46.686 [INFO][5604] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b" Apr 24 23:48:46.688103 containerd[1460]: time="2026-04-24T23:48:46.687834348Z" level=info msg="TearDown network for sandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\" successfully" Apr 24 23:48:46.695965 containerd[1460]: time="2026-04-24T23:48:46.695649698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:48:46.695965 containerd[1460]: time="2026-04-24T23:48:46.695732744Z" level=info msg="RemovePodSandbox \"30f2f1dd2671f2eda1576729c5e0d8c5031e0873741d55f57f71d198572c152b\" returns successfully" Apr 24 23:48:46.700808 containerd[1460]: time="2026-04-24T23:48:46.700434700Z" level=info msg="StopPodSandbox for \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\"" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.755 [WARNING][5639] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"c44becb5-fcd6-4bd1-91f5-38af42d74697", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2", Pod:"calico-apiserver-647b959c57-7mztb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8552979bafc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.755 [INFO][5639] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.755 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" iface="eth0" netns="" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.755 [INFO][5639] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.755 [INFO][5639] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.791 [INFO][5647] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.792 [INFO][5647] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.792 [INFO][5647] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.808 [WARNING][5647] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.810 [INFO][5647] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.814 [INFO][5647] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:46.819307 containerd[1460]: 2026-04-24 23:48:46.816 [INFO][5639] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.820650 containerd[1460]: time="2026-04-24T23:48:46.819319881Z" level=info msg="TearDown network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\" successfully" Apr 24 23:48:46.820650 containerd[1460]: time="2026-04-24T23:48:46.819491407Z" level=info msg="StopPodSandbox for \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\" returns successfully" Apr 24 23:48:46.825872 containerd[1460]: time="2026-04-24T23:48:46.825740157Z" level=info msg="RemovePodSandbox for \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\"" Apr 24 23:48:46.826402 containerd[1460]: time="2026-04-24T23:48:46.825897560Z" level=info msg="Forcibly stopping sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\"" Apr 24 23:48:46.836014 sshd[5591]: pam_unix(sshd:session): session closed for user core Apr 24 23:48:46.840272 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:58086.service: Deactivated successfully. 
Apr 24 23:48:46.842603 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:48:46.843848 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:48:46.844728 systemd-logind[1438]: Removed session 14. Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.876 [WARNING][5667] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"c44becb5-fcd6-4bd1-91f5-38af42d74697", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39d4933a49d99800556493c13ee953bcf7d214671338865f42eba515b75dbbb2", Pod:"calico-apiserver-647b959c57-7mztb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8552979bafc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.878 [INFO][5667] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.878 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" iface="eth0" netns="" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.878 [INFO][5667] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.878 [INFO][5667] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.971 [INFO][5678] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.972 [INFO][5678] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.972 [INFO][5678] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.981 [WARNING][5678] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.981 [INFO][5678] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" HandleID="k8s-pod-network.2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Workload="localhost-k8s-calico--apiserver--647b959c57--7mztb-eth0" Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.983 [INFO][5678] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:46.989869 containerd[1460]: 2026-04-24 23:48:46.985 [INFO][5667] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335" Apr 24 23:48:46.989869 containerd[1460]: time="2026-04-24T23:48:46.987553586Z" level=info msg="TearDown network for sandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\" successfully" Apr 24 23:48:47.009275 containerd[1460]: time="2026-04-24T23:48:47.009046232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:48:47.010193 containerd[1460]: time="2026-04-24T23:48:47.009431440Z" level=info msg="RemovePodSandbox \"2a79679c1073001c0c5195bfa04672eab5ddd997556371584db5ae2f66904335\" returns successfully" Apr 24 23:48:47.011179 containerd[1460]: time="2026-04-24T23:48:47.011142149Z" level=info msg="StopPodSandbox for \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\"" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.057 [WARNING][5696] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"730a30d0-cc29-4d69-b761-41db17443064", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579", Pod:"coredns-674b8bbfcf-6fp8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dd4fb62c59", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.057 [INFO][5696] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.057 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" iface="eth0" netns="" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.057 [INFO][5696] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.057 [INFO][5696] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.083 [INFO][5704] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.084 [INFO][5704] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.084 [INFO][5704] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.089 [WARNING][5704] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.089 [INFO][5704] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.090 [INFO][5704] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.093731 containerd[1460]: 2026-04-24 23:48:47.092 [INFO][5696] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.095054 containerd[1460]: time="2026-04-24T23:48:47.093797569Z" level=info msg="TearDown network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\" successfully" Apr 24 23:48:47.095054 containerd[1460]: time="2026-04-24T23:48:47.093846351Z" level=info msg="StopPodSandbox for \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\" returns successfully" Apr 24 23:48:47.095054 containerd[1460]: time="2026-04-24T23:48:47.094913552Z" level=info msg="RemovePodSandbox for \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\"" Apr 24 23:48:47.095054 containerd[1460]: time="2026-04-24T23:48:47.094965960Z" level=info msg="Forcibly stopping sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\"" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.146 [WARNING][5722] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"730a30d0-cc29-4d69-b761-41db17443064", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32e8e6f7dfd76a1001827cfb3e00cdfe2221d34d87f87c1c0bd25d4371066579", Pod:"coredns-674b8bbfcf-6fp8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dd4fb62c59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.146 [INFO][5722] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.146 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" iface="eth0" netns="" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.146 [INFO][5722] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.146 [INFO][5722] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.179 [INFO][5730] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.179 [INFO][5730] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.179 [INFO][5730] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.187 [WARNING][5730] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.187 [INFO][5730] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" HandleID="k8s-pod-network.31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Workload="localhost-k8s-coredns--674b8bbfcf--6fp8r-eth0" Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.189 [INFO][5730] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.192952 containerd[1460]: 2026-04-24 23:48:47.190 [INFO][5722] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3" Apr 24 23:48:47.194337 containerd[1460]: time="2026-04-24T23:48:47.193048131Z" level=info msg="TearDown network for sandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\" successfully" Apr 24 23:48:47.216348 containerd[1460]: time="2026-04-24T23:48:47.216269053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:48:47.217270 containerd[1460]: time="2026-04-24T23:48:47.216411450Z" level=info msg="RemovePodSandbox \"31c43816c0b67eafba22097110f500a25fda94906fa6ab7093bcf002226700d3\" returns successfully" Apr 24 23:48:47.217918 containerd[1460]: time="2026-04-24T23:48:47.217891548Z" level=info msg="StopPodSandbox for \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\"" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.275 [WARNING][5748] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bfrts-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9", Pod:"coredns-674b8bbfcf-bfrts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f135889d78", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.276 [INFO][5748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.276 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" iface="eth0" netns="" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.276 [INFO][5748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.276 [INFO][5748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.325 [INFO][5761] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.325 [INFO][5761] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.326 [INFO][5761] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.334 [WARNING][5761] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.334 [INFO][5761] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.336 [INFO][5761] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.340791 containerd[1460]: 2026-04-24 23:48:47.338 [INFO][5748] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.342720 containerd[1460]: time="2026-04-24T23:48:47.340873893Z" level=info msg="TearDown network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\" successfully" Apr 24 23:48:47.342720 containerd[1460]: time="2026-04-24T23:48:47.340968146Z" level=info msg="StopPodSandbox for \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\" returns successfully" Apr 24 23:48:47.343479 containerd[1460]: time="2026-04-24T23:48:47.343441341Z" level=info msg="RemovePodSandbox for \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\"" Apr 24 23:48:47.343567 containerd[1460]: time="2026-04-24T23:48:47.343555869Z" level=info msg="Forcibly stopping sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\"" Apr 24 23:48:47.528431 containerd[1460]: time="2026-04-24T23:48:47.528290015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:47.533593 containerd[1460]: time="2026-04-24T23:48:47.529288867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:48:47.533593 containerd[1460]: time="2026-04-24T23:48:47.532170585Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:47.536616 containerd[1460]: time="2026-04-24T23:48:47.536554693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:48:47.537975 containerd[1460]: time="2026-04-24T23:48:47.537481749Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.234738385s" Apr 24 23:48:47.537975 containerd[1460]: time="2026-04-24T23:48:47.537514257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:48:47.541819 containerd[1460]: time="2026-04-24T23:48:47.541672920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:48:47.546838 containerd[1460]: time="2026-04-24T23:48:47.546808549Z" level=info msg="CreateContainer within sandbox \"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.430 [WARNING][5778] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bfrts-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6df4f42b-ef79-4fc6-af47-4c5bb2c7dea4", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f654d85756999a7e1e82cde62f480922823a3afdf21cb08022512cfbf08edff9", Pod:"coredns-674b8bbfcf-bfrts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f135889d78", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.432 [INFO][5778] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.432 [INFO][5778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" iface="eth0" netns="" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.432 [INFO][5778] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.432 [INFO][5778] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.524 [INFO][5787] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.526 [INFO][5787] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.526 [INFO][5787] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.542 [WARNING][5787] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.542 [INFO][5787] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" HandleID="k8s-pod-network.b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Workload="localhost-k8s-coredns--674b8bbfcf--bfrts-eth0" Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.546 [INFO][5787] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.550255 containerd[1460]: 2026-04-24 23:48:47.548 [INFO][5778] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b" Apr 24 23:48:47.550914 containerd[1460]: time="2026-04-24T23:48:47.550381286Z" level=info msg="TearDown network for sandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\" successfully" Apr 24 23:48:47.553881 containerd[1460]: time="2026-04-24T23:48:47.553850484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:48:47.553954 containerd[1460]: time="2026-04-24T23:48:47.553918444Z" level=info msg="RemovePodSandbox \"b907ca4e520fcf33889b7a62ba343708c9df67804dbe0c9547615f94cf54a63b\" returns successfully" Apr 24 23:48:47.554391 containerd[1460]: time="2026-04-24T23:48:47.554367665Z" level=info msg="StopPodSandbox for \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\"" Apr 24 23:48:47.581658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156223443.mount: Deactivated successfully. Apr 24 23:48:47.585620 containerd[1460]: time="2026-04-24T23:48:47.585379044Z" level=info msg="CreateContainer within sandbox \"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"144030c21c992ca746eed7843b39297d6b369ad541732163f123668b830fe17e\"" Apr 24 23:48:47.588483 containerd[1460]: time="2026-04-24T23:48:47.588459262Z" level=info msg="StartContainer for \"144030c21c992ca746eed7843b39297d6b369ad541732163f123668b830fe17e\"" Apr 24 23:48:47.655975 systemd[1]: Started cri-containerd-144030c21c992ca746eed7843b39297d6b369ad541732163f123668b830fe17e.scope - libcontainer container 144030c21c992ca746eed7843b39297d6b369ad541732163f123668b830fe17e. Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.627 [WARNING][5805] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0", GenerateName:"calico-kube-controllers-58c4885b55-", Namespace:"calico-system", SelfLink:"", UID:"c75b49fd-5383-4d1b-9dab-badcf7790241", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c4885b55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e", Pod:"calico-kube-controllers-58c4885b55-2kpm5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349da596802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.629 [INFO][5805] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.629 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" iface="eth0" netns="" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.629 [INFO][5805] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.629 [INFO][5805] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.663 [INFO][5824] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.663 [INFO][5824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.663 [INFO][5824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.670 [WARNING][5824] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.670 [INFO][5824] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.672 [INFO][5824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.675600 containerd[1460]: 2026-04-24 23:48:47.673 [INFO][5805] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.675600 containerd[1460]: time="2026-04-24T23:48:47.675561125Z" level=info msg="TearDown network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\" successfully" Apr 24 23:48:47.675600 containerd[1460]: time="2026-04-24T23:48:47.675580910Z" level=info msg="StopPodSandbox for \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\" returns successfully" Apr 24 23:48:47.676294 containerd[1460]: time="2026-04-24T23:48:47.676251899Z" level=info msg="RemovePodSandbox for \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\"" Apr 24 23:48:47.676326 containerd[1460]: time="2026-04-24T23:48:47.676296349Z" level=info msg="Forcibly stopping sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\"" Apr 24 23:48:47.682178 containerd[1460]: time="2026-04-24T23:48:47.682012519Z" level=info msg="StartContainer for \"144030c21c992ca746eed7843b39297d6b369ad541732163f123668b830fe17e\" returns successfully" Apr 24 23:48:47.763478 
containerd[1460]: 2026-04-24 23:48:47.721 [WARNING][5864] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0", GenerateName:"calico-kube-controllers-58c4885b55-", Namespace:"calico-system", SelfLink:"", UID:"c75b49fd-5383-4d1b-9dab-badcf7790241", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c4885b55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cfb293e33320c1b29ceaf8b5e30626c7c49eb78629536f7940217831a33e37e", Pod:"calico-kube-controllers-58c4885b55-2kpm5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349da596802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.722 [INFO][5864] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.722 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" iface="eth0" netns="" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.722 [INFO][5864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.722 [INFO][5864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.751 [INFO][5878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.752 [INFO][5878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.752 [INFO][5878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.758 [WARNING][5878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.758 [INFO][5878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" HandleID="k8s-pod-network.43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Workload="localhost-k8s-calico--kube--controllers--58c4885b55--2kpm5-eth0" Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.760 [INFO][5878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.763478 containerd[1460]: 2026-04-24 23:48:47.761 [INFO][5864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467" Apr 24 23:48:47.763478 containerd[1460]: time="2026-04-24T23:48:47.763542306Z" level=info msg="TearDown network for sandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\" successfully" Apr 24 23:48:47.783049 containerd[1460]: time="2026-04-24T23:48:47.782862376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:48:47.783049 containerd[1460]: time="2026-04-24T23:48:47.782995693Z" level=info msg="RemovePodSandbox \"43ab3cca724edab0725929137499db918da7f088cc1fd5c22b17ccc995833467\" returns successfully" Apr 24 23:48:47.783825 containerd[1460]: time="2026-04-24T23:48:47.783800200Z" level=info msg="StopPodSandbox for \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\"" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.822 [WARNING][5897] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--8djts-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76788716-614e-4c65-978c-a90311cc57b5", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638", Pod:"goldmane-5b85766d88-8djts", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96875bcbaa5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.822 [INFO][5897] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.822 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" iface="eth0" netns="" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.822 [INFO][5897] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.822 [INFO][5897] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.843 [INFO][5906] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.844 [INFO][5906] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.844 [INFO][5906] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.849 [WARNING][5906] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.849 [INFO][5906] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.851 [INFO][5906] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:47.853801 containerd[1460]: 2026-04-24 23:48:47.852 [INFO][5897] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:47.854282 containerd[1460]: time="2026-04-24T23:48:47.853828095Z" level=info msg="TearDown network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\" successfully" Apr 24 23:48:47.854282 containerd[1460]: time="2026-04-24T23:48:47.853850024Z" level=info msg="StopPodSandbox for \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\" returns successfully" Apr 24 23:48:47.854423 containerd[1460]: time="2026-04-24T23:48:47.854358530Z" level=info msg="RemovePodSandbox for \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\"" Apr 24 23:48:47.854423 containerd[1460]: time="2026-04-24T23:48:47.854388613Z" level=info msg="Forcibly stopping sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\"" Apr 24 23:48:47.995838 kubelet[2505]: I0424 23:48:47.995646 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zpms9" podStartSLOduration=24.002108997 podStartE2EDuration="44.995622956s" 
podCreationTimestamp="2026-04-24 23:48:03 +0000 UTC" firstStartedPulling="2026-04-24 23:48:26.54632329 +0000 UTC m=+40.387250369" lastFinishedPulling="2026-04-24 23:48:47.539837248 +0000 UTC m=+61.380764328" observedRunningTime="2026-04-24 23:48:47.993565081 +0000 UTC m=+61.834492161" watchObservedRunningTime="2026-04-24 23:48:47.995622956 +0000 UTC m=+61.836550053" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.899 [WARNING][5924] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--8djts-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76788716-614e-4c65-978c-a90311cc57b5", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0688604281e6ea8f8e8afa754209e2449414d300d3981c2551066db37410638", Pod:"goldmane-5b85766d88-8djts", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96875bcbaa5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.900 [INFO][5924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.900 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" iface="eth0" netns="" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.900 [INFO][5924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.900 [INFO][5924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.987 [INFO][5933] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.988 [INFO][5933] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:47.988 [INFO][5933] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:48.001 [WARNING][5933] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:48.015 [INFO][5933] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" HandleID="k8s-pod-network.6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Workload="localhost-k8s-goldmane--5b85766d88--8djts-eth0" Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:48.025 [INFO][5933] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:48:48.032832 containerd[1460]: 2026-04-24 23:48:48.030 [INFO][5924] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9" Apr 24 23:48:48.034078 containerd[1460]: time="2026-04-24T23:48:48.032870532Z" level=info msg="TearDown network for sandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\" successfully" Apr 24 23:48:48.036472 containerd[1460]: time="2026-04-24T23:48:48.036425556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:48:48.036538 containerd[1460]: time="2026-04-24T23:48:48.036506555Z" level=info msg="RemovePodSandbox \"6bf8fc0aaf73adb68bddf7d940c379f97fbde7b0f6f0334ac5dbef9561abccc9\" returns successfully" Apr 24 23:48:48.037874 containerd[1460]: time="2026-04-24T23:48:48.037847511Z" level=info msg="StopPodSandbox for \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\"" Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.113 [WARNING][5952] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"ab27142a-e4c1-4ad9-85f0-56df27f44b76", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18", Pod:"calico-apiserver-647b959c57-h2mjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9dd4368cc57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.114 [INFO][5952] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.114 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" iface="eth0" netns="" Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.114 [INFO][5952] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.114 [INFO][5952] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.143 [INFO][5961] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0" Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.143 [INFO][5961] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.143 [INFO][5961] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.156 [WARNING][5961] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0"
Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.157 [INFO][5961] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0"
Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.159 [INFO][5961] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:48.163322 containerd[1460]: 2026-04-24 23:48:48.161 [INFO][5952] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9"
Apr 24 23:48:48.164893 containerd[1460]: time="2026-04-24T23:48:48.163465543Z" level=info msg="TearDown network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\" successfully"
Apr 24 23:48:48.164893 containerd[1460]: time="2026-04-24T23:48:48.163518100Z" level=info msg="StopPodSandbox for \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\" returns successfully"
Apr 24 23:48:48.164893 containerd[1460]: time="2026-04-24T23:48:48.163946775Z" level=info msg="RemovePodSandbox for \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\""
Apr 24 23:48:48.164893 containerd[1460]: time="2026-04-24T23:48:48.164001593Z" level=info msg="Forcibly stopping sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\""
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.225 [WARNING][5979] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0", GenerateName:"calico-apiserver-647b959c57-", Namespace:"calico-system", SelfLink:"", UID:"ab27142a-e4c1-4ad9-85f0-56df27f44b76", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647b959c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90763a1dcea9b2684ae68dcf943352b213d90c66444f73a1e76cc8bd949a1e18", Pod:"calico-apiserver-647b959c57-h2mjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9dd4368cc57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.226 [INFO][5979] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9"
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.226 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" iface="eth0" netns=""
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.226 [INFO][5979] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9"
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.226 [INFO][5979] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9"
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.254 [INFO][5987] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0"
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.254 [INFO][5987] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.255 [INFO][5987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.266 [WARNING][5987] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0"
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.267 [INFO][5987] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" HandleID="k8s-pod-network.d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9" Workload="localhost-k8s-calico--apiserver--647b959c57--h2mjz-eth0"
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.270 [INFO][5987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:48.273666 containerd[1460]: 2026-04-24 23:48:48.271 [INFO][5979] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9"
Apr 24 23:48:48.276528 containerd[1460]: time="2026-04-24T23:48:48.273786293Z" level=info msg="TearDown network for sandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\" successfully"
Apr 24 23:48:48.280979 containerd[1460]: time="2026-04-24T23:48:48.280846218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:48:48.281157 containerd[1460]: time="2026-04-24T23:48:48.281130331Z" level=info msg="RemovePodSandbox \"d0575025d16dd681e80ffdaa8172207fb920d66c1799ca9f386e1129cab306b9\" returns successfully"
Apr 24 23:48:48.282661 containerd[1460]: time="2026-04-24T23:48:48.282618659Z" level=info msg="StopPodSandbox for \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\""
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.331 [WARNING][6005] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zpms9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b177f61-af99-4e0d-af51-f699d327434d", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645", Pod:"csi-node-driver-zpms9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6f21c604e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.331 [INFO][6005] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.331 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" iface="eth0" netns=""
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.331 [INFO][6005] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.331 [INFO][6005] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.361 [INFO][6013] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0"
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.362 [INFO][6013] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.362 [INFO][6013] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.371 [WARNING][6013] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0"
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.371 [INFO][6013] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0"
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.372 [INFO][6013] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:48.377122 containerd[1460]: 2026-04-24 23:48:48.374 [INFO][6005] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.377122 containerd[1460]: time="2026-04-24T23:48:48.376900373Z" level=info msg="TearDown network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\" successfully"
Apr 24 23:48:48.377122 containerd[1460]: time="2026-04-24T23:48:48.377020912Z" level=info msg="StopPodSandbox for \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\" returns successfully"
Apr 24 23:48:48.378879 containerd[1460]: time="2026-04-24T23:48:48.378452762Z" level=info msg="RemovePodSandbox for \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\""
Apr 24 23:48:48.378879 containerd[1460]: time="2026-04-24T23:48:48.378536340Z" level=info msg="Forcibly stopping sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\""
Apr 24 23:48:48.459135 kubelet[2505]: I0424 23:48:48.458892 2505 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 24 23:48:48.460256 kubelet[2505]: I0424 23:48:48.460205 2505 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.444 [WARNING][6030] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zpms9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b177f61-af99-4e0d-af51-f699d327434d", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9519314ccac12aa576426e58a431907fd6efd774ae3fc92385cb4d2358b79645", Pod:"csi-node-driver-zpms9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6f21c604e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.445 [INFO][6030] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.445 [INFO][6030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" iface="eth0" netns=""
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.445 [INFO][6030] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.445 [INFO][6030] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.486 [INFO][6039] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0"
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.487 [INFO][6039] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.487 [INFO][6039] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.500 [WARNING][6039] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0"
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.501 [INFO][6039] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" HandleID="k8s-pod-network.f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668" Workload="localhost-k8s-csi--node--driver--zpms9-eth0"
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.503 [INFO][6039] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:48:48.512594 containerd[1460]: 2026-04-24 23:48:48.508 [INFO][6030] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668"
Apr 24 23:48:48.514715 containerd[1460]: time="2026-04-24T23:48:48.512910337Z" level=info msg="TearDown network for sandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\" successfully"
Apr 24 23:48:48.518506 containerd[1460]: time="2026-04-24T23:48:48.518469679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:48:48.518974 containerd[1460]: time="2026-04-24T23:48:48.518894012Z" level=info msg="RemovePodSandbox \"f05909fb28a614f70551d90db1fb2a8db32f23524235859bad5f18f4546b2668\" returns successfully"
Apr 24 23:48:49.629216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633127928.mount: Deactivated successfully.
Apr 24 23:48:49.642337 containerd[1460]: time="2026-04-24T23:48:49.642261988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:48:49.642832 containerd[1460]: time="2026-04-24T23:48:49.642594945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 24 23:48:49.643654 containerd[1460]: time="2026-04-24T23:48:49.643611702Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:48:49.645507 containerd[1460]: time="2026-04-24T23:48:49.645469333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:48:49.646041 containerd[1460]: time="2026-04-24T23:48:49.646011732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.104294566s"
Apr 24 23:48:49.646135 containerd[1460]: time="2026-04-24T23:48:49.646044779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 24 23:48:49.654144 containerd[1460]: time="2026-04-24T23:48:49.653943136Z" level=info msg="CreateContainer within sandbox \"de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 24 23:48:49.675097 containerd[1460]: time="2026-04-24T23:48:49.674952743Z" level=info msg="CreateContainer within sandbox \"de7dce4a584cb4a49154fe0fd159c31d9384df9929c41e8e886b6c299880267e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ace7612480c24e04e66acb29bc25e4d161b2b5468edcc73d5ad4cbb2e0b6b691\""
Apr 24 23:48:49.677330 containerd[1460]: time="2026-04-24T23:48:49.677299949Z" level=info msg="StartContainer for \"ace7612480c24e04e66acb29bc25e4d161b2b5468edcc73d5ad4cbb2e0b6b691\""
Apr 24 23:48:49.720980 systemd[1]: Started cri-containerd-ace7612480c24e04e66acb29bc25e4d161b2b5468edcc73d5ad4cbb2e0b6b691.scope - libcontainer container ace7612480c24e04e66acb29bc25e4d161b2b5468edcc73d5ad4cbb2e0b6b691.
Apr 24 23:48:49.761995 containerd[1460]: time="2026-04-24T23:48:49.761940184Z" level=info msg="StartContainer for \"ace7612480c24e04e66acb29bc25e4d161b2b5468edcc73d5ad4cbb2e0b6b691\" returns successfully"
Apr 24 23:48:50.057832 kubelet[2505]: I0424 23:48:50.057535 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8bf6f76b-6k2b6" podStartSLOduration=2.6968251580000002 podStartE2EDuration="25.057492072s" podCreationTimestamp="2026-04-24 23:48:25 +0000 UTC" firstStartedPulling="2026-04-24 23:48:27.288583354 +0000 UTC m=+41.129510434" lastFinishedPulling="2026-04-24 23:48:49.649250268 +0000 UTC m=+63.490177348" observedRunningTime="2026-04-24 23:48:50.054493589 +0000 UTC m=+63.895420669" watchObservedRunningTime="2026-04-24 23:48:50.057492072 +0000 UTC m=+63.898419151"
Apr 24 23:48:51.850900 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:58102.service - OpenSSH per-connection server daemon (10.0.0.1:58102).
Apr 24 23:48:51.935371 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 58102 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:48:51.937807 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:48:51.943602 systemd-logind[1438]: New session 15 of user core.
Apr 24 23:48:51.952922 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 23:48:52.187666 sshd[6094]: pam_unix(sshd:session): session closed for user core
Apr 24 23:48:52.192198 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:58102.service: Deactivated successfully.
Apr 24 23:48:52.194390 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 23:48:52.195070 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
Apr 24 23:48:52.196094 systemd-logind[1438]: Removed session 15.
Apr 24 23:48:53.813026 kubelet[2505]: I0424 23:48:53.812651 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:48:57.212054 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248).
Apr 24 23:48:57.258557 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:48:57.259892 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:48:57.264621 systemd-logind[1438]: New session 16 of user core.
Apr 24 23:48:57.275982 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 23:48:57.461375 sshd[6151]: pam_unix(sshd:session): session closed for user core
Apr 24 23:48:57.472180 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:37248.service: Deactivated successfully.
Apr 24 23:48:57.473706 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 23:48:57.474828 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
Apr 24 23:48:57.483836 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:37262.service - OpenSSH per-connection server daemon (10.0.0.1:37262).
Apr 24 23:48:57.484600 systemd-logind[1438]: Removed session 16.
Apr 24 23:48:57.515946 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 37262 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:48:57.518096 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:48:57.524144 systemd-logind[1438]: New session 17 of user core.
Apr 24 23:48:57.531919 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 23:48:57.749593 sshd[6165]: pam_unix(sshd:session): session closed for user core
Apr 24 23:48:57.762893 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:37262.service: Deactivated successfully.
Apr 24 23:48:57.764407 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 23:48:57.765765 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Apr 24 23:48:57.775023 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:37276.service - OpenSSH per-connection server daemon (10.0.0.1:37276).
Apr 24 23:48:57.775618 systemd-logind[1438]: Removed session 17.
Apr 24 23:48:57.812220 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 37276 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:48:57.814240 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:48:57.820132 systemd-logind[1438]: New session 18 of user core.
Apr 24 23:48:57.833942 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 23:48:58.538299 sshd[6178]: pam_unix(sshd:session): session closed for user core
Apr 24 23:48:58.562613 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:37282.service - OpenSSH per-connection server daemon (10.0.0.1:37282).
Apr 24 23:48:58.563391 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:37276.service: Deactivated successfully.
Apr 24 23:48:58.569895 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 23:48:58.576171 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Apr 24 23:48:58.578993 systemd-logind[1438]: Removed session 18.
Apr 24 23:48:58.631600 sshd[6201]: Accepted publickey for core from 10.0.0.1 port 37282 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:48:58.633305 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:48:58.641968 systemd-logind[1438]: New session 19 of user core.
Apr 24 23:48:58.652975 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 23:48:59.080998 sshd[6201]: pam_unix(sshd:session): session closed for user core
Apr 24 23:48:59.097576 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:37282.service: Deactivated successfully.
Apr 24 23:48:59.100426 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 23:48:59.109122 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Apr 24 23:48:59.123157 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:37290.service - OpenSSH per-connection server daemon (10.0.0.1:37290).
Apr 24 23:48:59.124950 systemd-logind[1438]: Removed session 19.
Apr 24 23:48:59.152926 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 37290 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:48:59.154481 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:48:59.159042 systemd-logind[1438]: New session 20 of user core.
Apr 24 23:48:59.167951 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:48:59.302319 sshd[6219]: pam_unix(sshd:session): session closed for user core
Apr 24 23:48:59.306452 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:37290.service: Deactivated successfully.
Apr 24 23:48:59.308688 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:48:59.309290 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:48:59.310251 systemd-logind[1438]: Removed session 20.
Apr 24 23:49:04.320401 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294).
Apr 24 23:49:04.372196 sshd[6239]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:49:04.380990 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:49:04.392850 systemd-logind[1438]: New session 21 of user core.
Apr 24 23:49:04.404059 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:49:04.631206 sshd[6239]: pam_unix(sshd:session): session closed for user core
Apr 24 23:49:04.635821 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:37294.service: Deactivated successfully.
Apr 24 23:49:04.638584 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:49:04.639522 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:49:04.640499 systemd-logind[1438]: Removed session 21.
Apr 24 23:49:05.269833 kubelet[2505]: E0424 23:49:05.269530 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:49:09.663025 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:46310.service - OpenSSH per-connection server daemon (10.0.0.1:46310).
Apr 24 23:49:09.692987 sshd[6296]: Accepted publickey for core from 10.0.0.1 port 46310 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg
Apr 24 23:49:09.694296 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:49:09.698535 systemd-logind[1438]: New session 22 of user core.
Apr 24 23:49:09.709171 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 24 23:49:09.881367 sshd[6296]: pam_unix(sshd:session): session closed for user core
Apr 24 23:49:09.885808 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:46310.service: Deactivated successfully.
Apr 24 23:49:09.893299 systemd[1]: session-22.scope: Deactivated successfully.
Apr 24 23:49:09.895722 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit.
Apr 24 23:49:09.897411 systemd-logind[1438]: Removed session 22.