Apr 14 13:31:12.932123 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 13:31:12.932143 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:31:12.932152 kernel: BIOS-provided physical RAM map:
Apr 14 13:31:12.932158 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 13:31:12.932163 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 13:31:12.932168 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 13:31:12.932174 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 13:31:12.932179 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 13:31:12.932184 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 13:31:12.932191 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 13:31:12.932196 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 13:31:12.932201 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 13:31:12.932206 kernel: NX (Execute Disable) protection: active
Apr 14 13:31:12.932212 kernel: APIC: Static calls initialized
Apr 14 13:31:12.932218 kernel: SMBIOS 2.8 present.
Apr 14 13:31:12.932226 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 13:31:12.932231 kernel: Hypervisor detected: KVM
Apr 14 13:31:12.932237 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 13:31:12.932242 kernel: kvm-clock: using sched offset of 3597009404 cycles
Apr 14 13:31:12.932249 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 13:31:12.932254 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 13:31:12.932260 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 13:31:12.932266 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 13:31:12.932272 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 13:31:12.932279 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 13:31:12.932285 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 13:31:12.932291 kernel: Using GB pages for direct mapping
Apr 14 13:31:12.932297 kernel: ACPI: Early table checksum verification disabled
Apr 14 13:31:12.932302 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 13:31:12.932308 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932314 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932319 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932325 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 13:31:12.932332 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932338 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932343 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932349 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:31:12.932354 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 13:31:12.932360 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 13:31:12.932397 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 13:31:12.932408 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 13:31:12.932414 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 13:31:12.932420 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 13:31:12.932426 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 13:31:12.932432 kernel: No NUMA configuration found
Apr 14 13:31:12.932438 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 13:31:12.932444 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 13:31:12.932452 kernel: Zone ranges:
Apr 14 13:31:12.932458 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 13:31:12.932464 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 13:31:12.932470 kernel: Normal empty
Apr 14 13:31:12.932476 kernel: Movable zone start for each node
Apr 14 13:31:12.932482 kernel: Early memory node ranges
Apr 14 13:31:12.932487 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 13:31:12.932492 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 13:31:12.932497 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 13:31:12.932502 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 13:31:12.932508 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 13:31:12.932513 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 13:31:12.932518 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 13:31:12.932523 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 13:31:12.932529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 13:31:12.932534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 13:31:12.932539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 13:31:12.932544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 13:31:12.932549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 13:31:12.932555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 13:31:12.932560 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 13:31:12.932565 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 13:31:12.932570 kernel: TSC deadline timer available
Apr 14 13:31:12.932575 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 13:31:12.932580 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 13:31:12.932585 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 13:31:12.932590 kernel: kvm-guest: setup PV sched yield
Apr 14 13:31:12.932595 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 13:31:12.932602 kernel: Booting paravirtualized kernel on KVM
Apr 14 13:31:12.932607 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 13:31:12.932612 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 13:31:12.932617 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 13:31:12.932622 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 13:31:12.932627 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 13:31:12.932632 kernel: kvm-guest: PV spinlocks enabled
Apr 14 13:31:12.932637 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 13:31:12.932643 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:31:12.932649 kernel: random: crng init done
Apr 14 13:31:12.932666 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 13:31:12.932671 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 13:31:12.932676 kernel: Fallback order for Node 0: 0
Apr 14 13:31:12.932681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 13:31:12.932686 kernel: Policy zone: DMA32
Apr 14 13:31:12.932691 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 13:31:12.932696 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved)
Apr 14 13:31:12.932703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 13:31:12.932708 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 13:31:12.932713 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 13:31:12.932718 kernel: Dynamic Preempt: voluntary
Apr 14 13:31:12.932723 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 13:31:12.932729 kernel: rcu: RCU event tracing is enabled.
Apr 14 13:31:12.932734 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 13:31:12.932739 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 13:31:12.932744 kernel: Rude variant of Tasks RCU enabled.
Apr 14 13:31:12.932750 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 13:31:12.932755 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 13:31:12.932760 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 13:31:12.932765 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 13:31:12.932770 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 13:31:12.932775 kernel: Console: colour VGA+ 80x25
Apr 14 13:31:12.932780 kernel: printk: console [ttyS0] enabled
Apr 14 13:31:12.932785 kernel: ACPI: Core revision 20230628
Apr 14 13:31:12.932790 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 13:31:12.932796 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 13:31:12.932801 kernel: x2apic enabled
Apr 14 13:31:12.932806 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 13:31:12.932811 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 13:31:12.932816 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 13:31:12.932821 kernel: kvm-guest: setup PV IPIs
Apr 14 13:31:12.932826 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 13:31:12.932831 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:31:12.932843 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 13:31:12.932849 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 13:31:12.932854 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 13:31:12.932860 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 13:31:12.932866 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 13:31:12.932872 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 13:31:12.932877 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 13:31:12.932883 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 13:31:12.932890 kernel: RETBleed: Vulnerable
Apr 14 13:31:12.932895 kernel: Speculative Store Bypass: Vulnerable
Apr 14 13:31:12.932900 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 13:31:12.932906 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 13:31:12.932911 kernel: active return thunk: its_return_thunk
Apr 14 13:31:12.932917 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 13:31:12.932922 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 13:31:12.932927 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 13:31:12.932933 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 13:31:12.932940 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 13:31:12.932945 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 13:31:12.932951 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 13:31:12.932972 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 13:31:12.932978 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 13:31:12.932984 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 13:31:12.932989 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 13:31:12.932994 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 13:31:12.933000 kernel: Freeing SMP alternatives memory: 32K
Apr 14 13:31:12.933007 kernel: pid_max: default: 32768 minimum: 301
Apr 14 13:31:12.933013 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 13:31:12.933018 kernel: landlock: Up and running.
Apr 14 13:31:12.933024 kernel: SELinux: Initializing.
Apr 14 13:31:12.933029 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:31:12.933035 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:31:12.933041 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 13:31:12.933046 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:31:12.933052 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:31:12.933059 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:31:12.933064 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 13:31:12.933070 kernel: signal: max sigframe size: 3632
Apr 14 13:31:12.933075 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 13:31:12.933081 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 13:31:12.933087 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 13:31:12.933092 kernel: smp: Bringing up secondary CPUs ...
Apr 14 13:31:12.933098 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 13:31:12.933103 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 13:31:12.933110 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 13:31:12.933115 kernel: smpboot: Max logical packages: 1
Apr 14 13:31:12.933121 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 13:31:12.933126 kernel: devtmpfs: initialized
Apr 14 13:31:12.933132 kernel: x86/mm: Memory block size: 128MB
Apr 14 13:31:12.933137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 13:31:12.933143 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 13:31:12.933148 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 13:31:12.933154 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 13:31:12.933161 kernel: audit: initializing netlink subsys (disabled)
Apr 14 13:31:12.933167 kernel: audit: type=2000 audit(1776173472.073:1): state=initialized audit_enabled=0 res=1
Apr 14 13:31:12.933172 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 13:31:12.933177 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 13:31:12.933183 kernel: cpuidle: using governor menu
Apr 14 13:31:12.933188 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 13:31:12.933194 kernel: dca service started, version 1.12.1
Apr 14 13:31:12.933199 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 13:31:12.933205 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 13:31:12.933212 kernel: PCI: Using configuration type 1 for base access
Apr 14 13:31:12.933217 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 13:31:12.933223 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 13:31:12.933228 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 13:31:12.933234 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 13:31:12.933240 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 13:31:12.933245 kernel: ACPI: Added _OSI(Module Device)
Apr 14 13:31:12.933250 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 13:31:12.933256 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 13:31:12.933263 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 13:31:12.933269 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 13:31:12.933274 kernel: ACPI: Interpreter enabled
Apr 14 13:31:12.933280 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 13:31:12.933285 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 13:31:12.933291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 13:31:12.933296 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 13:31:12.933302 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 13:31:12.933307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 13:31:12.933449 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 13:31:12.933513 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 13:31:12.933570 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 13:31:12.933577 kernel: PCI host bridge to bus 0000:00
Apr 14 13:31:12.933639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 13:31:12.933689 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 13:31:12.933741 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 13:31:12.933790 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 13:31:12.933839 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 13:31:12.933887 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 13:31:12.933936 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 13:31:12.934028 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 13:31:12.934091 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 13:31:12.934151 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 13:31:12.934206 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 13:31:12.934261 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 13:31:12.934316 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 13:31:12.934422 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 13:31:12.934482 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 13:31:12.934538 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 13:31:12.934596 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 13:31:12.934656 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 13:31:12.934713 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 13:31:12.934768 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 13:31:12.934823 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 13:31:12.934884 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 13:31:12.934942 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 13:31:12.935021 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 13:31:12.935077 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 13:31:12.935132 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 13:31:12.935191 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 13:31:12.935246 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 13:31:12.935306 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 13:31:12.935364 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 13:31:12.935536 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 13:31:12.935596 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 13:31:12.935651 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 13:31:12.935658 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 13:31:12.935663 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 13:31:12.935669 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 13:31:12.935675 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 13:31:12.935683 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 13:31:12.935688 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 13:31:12.935694 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 13:31:12.935699 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 13:31:12.935705 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 13:31:12.935710 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 13:31:12.935716 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 13:31:12.935721 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 13:31:12.935727 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 13:31:12.935734 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 13:31:12.935739 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 13:31:12.935745 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 13:31:12.935750 kernel: iommu: Default domain type: Translated
Apr 14 13:31:12.935756 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 13:31:12.935761 kernel: PCI: Using ACPI for IRQ routing
Apr 14 13:31:12.935767 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 13:31:12.935772 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 13:31:12.935777 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 13:31:12.935834 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 13:31:12.935887 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 13:31:12.935942 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 13:31:12.935949 kernel: vgaarb: loaded
Apr 14 13:31:12.935954 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 13:31:12.936143 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 13:31:12.936150 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 13:31:12.936155 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 13:31:12.936165 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 13:31:12.936172 kernel: pnp: PnP ACPI init
Apr 14 13:31:12.936243 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 13:31:12.936251 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 13:31:12.936257 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 13:31:12.936262 kernel: NET: Registered PF_INET protocol family
Apr 14 13:31:12.936268 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 13:31:12.936274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 13:31:12.936281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 13:31:12.936287 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 13:31:12.936293 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 13:31:12.936299 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 13:31:12.936304 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:31:12.936310 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:31:12.936315 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 13:31:12.936321 kernel: NET: Registered PF_XDP protocol family
Apr 14 13:31:12.936410 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 13:31:12.936465 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 13:31:12.936515 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 13:31:12.936565 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 13:31:12.936614 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 13:31:12.936662 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 13:31:12.936669 kernel: PCI: CLS 0 bytes, default 64
Apr 14 13:31:12.936675 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 13:31:12.936681 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:31:12.936686 kernel: Initialise system trusted keyrings
Apr 14 13:31:12.936694 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 13:31:12.936699 kernel: Key type asymmetric registered
Apr 14 13:31:12.936705 kernel: Asymmetric key parser 'x509' registered
Apr 14 13:31:12.936710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 13:31:12.936716 kernel: io scheduler mq-deadline registered
Apr 14 13:31:12.936722 kernel: io scheduler kyber registered
Apr 14 13:31:12.936728 kernel: io scheduler bfq registered
Apr 14 13:31:12.936733 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 13:31:12.936740 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 13:31:12.936746 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 13:31:12.936752 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 13:31:12.936758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 13:31:12.936763 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 13:31:12.936769 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 13:31:12.936774 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 13:31:12.936780 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 13:31:12.936785 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 13:31:12.936846 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 13:31:12.936897 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 13:31:12.936948 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T13:31:12 UTC (1776173472)
Apr 14 13:31:12.937022 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 13:31:12.937029 kernel: intel_pstate: CPU model not supported
Apr 14 13:31:12.937035 kernel: NET: Registered PF_INET6 protocol family
Apr 14 13:31:12.937041 kernel: Segment Routing with IPv6
Apr 14 13:31:12.937046 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 13:31:12.937054 kernel: NET: Registered PF_PACKET protocol family
Apr 14 13:31:12.937060 kernel: Key type dns_resolver registered
Apr 14 13:31:12.937066 kernel: IPI shorthand broadcast: enabled
Apr 14 13:31:12.937071 kernel: sched_clock: Marking stable (844015795, 225244516)->(1133146634, -63886323)
Apr 14 13:31:12.937077 kernel: registered taskstats version 1
Apr 14 13:31:12.937083 kernel: Loading compiled-in X.509 certificates
Apr 14 13:31:12.937088 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 13:31:12.937094 kernel: Key type .fscrypt registered
Apr 14 13:31:12.937099 kernel: Key type fscrypt-provisioning registered
Apr 14 13:31:12.937105 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 13:31:12.937112 kernel: ima: Allocated hash algorithm: sha1
Apr 14 13:31:12.937118 kernel: ima: No architecture policies found
Apr 14 13:31:12.937123 kernel: clk: Disabling unused clocks
Apr 14 13:31:12.937129 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 13:31:12.937134 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 13:31:12.937140 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 13:31:12.937145 kernel: Run /init as init process
Apr 14 13:31:12.937151 kernel: with arguments:
Apr 14 13:31:12.937157 kernel: /init
Apr 14 13:31:12.937164 kernel: with environment:
Apr 14 13:31:12.937169 kernel: HOME=/
Apr 14 13:31:12.937175 kernel: TERM=linux
Apr 14 13:31:12.937182 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:31:12.937190 systemd[1]: Detected virtualization kvm.
Apr 14 13:31:12.937196 systemd[1]: Detected architecture x86-64.
Apr 14 13:31:12.937202 systemd[1]: Running in initrd.
Apr 14 13:31:12.937209 systemd[1]: No hostname configured, using default hostname.
Apr 14 13:31:12.937215 systemd[1]: Hostname set to .
Apr 14 13:31:12.937221 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:31:12.937227 systemd[1]: Queued start job for default target initrd.target.
Apr 14 13:31:12.937232 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:31:12.937238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:31:12.937245 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 13:31:12.937251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:31:12.937258 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 13:31:12.937265 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 13:31:12.937281 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 13:31:12.937287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 13:31:12.937293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:31:12.937300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:31:12.937306 systemd[1]: Reached target paths.target - Path Units.
Apr 14 13:31:12.937313 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:31:12.937319 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:31:12.937325 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 13:31:12.937331 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:31:12.937337 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:31:12.937343 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 13:31:12.937349 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 13:31:12.937357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:31:12.937363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:31:12.937400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:31:12.937406 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 13:31:12.937412 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 13:31:12.937418 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:31:12.937424 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 13:31:12.937430 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 13:31:12.937438 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:31:12.937444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:31:12.937451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:31:12.937457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 13:31:12.937478 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 13:31:12.937496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:31:12.937505 systemd-journald[194]: Journal started
Apr 14 13:31:12.937523 systemd-journald[194]: Runtime Journal (/run/log/journal/0779edc061534903bab6c2194be5b28a) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:31:12.941090 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:31:12.941905 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 13:31:12.941935 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 13:31:13.072986 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 13:31:13.073018 kernel: Bridge firewalling registered
Apr 14 13:31:12.951621 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 13:31:12.969161 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 13:31:13.093328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:31:13.093724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:31:13.099799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:31:13.103157 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:31:13.108088 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:31:13.112542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:31:13.120499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:31:13.124976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:31:13.131836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:31:13.132048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:31:13.141570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:31:13.143799 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:31:13.148809 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 13:31:13.163690 dracut-cmdline[232]: dracut-dracut-053
Apr 14 13:31:13.166720 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:31:13.170840 systemd-resolved[230]: Positive Trust Anchors:
Apr 14 13:31:13.170848 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 13:31:13.170872 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 13:31:13.172944 systemd-resolved[230]: Defaulting to hostname 'linux'.
Apr 14 13:31:13.173715 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 13:31:13.176597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:31:13.228562 kernel: SCSI subsystem initialized
Apr 14 13:31:13.236431 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 13:31:13.246526 kernel: iscsi: registered transport (tcp)
Apr 14 13:31:13.265790 kernel: iscsi: registered transport (qla4xxx)
Apr 14 13:31:13.265906 kernel: QLogic iSCSI HBA Driver
Apr 14 13:31:13.298041 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:31:13.309546 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 13:31:13.336434 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 13:31:13.336575 kernel: device-mapper: uevent: version 1.0.3
Apr 14 13:31:13.336585 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 13:31:13.377622 kernel: raid6: avx512x4 gen() 45095 MB/s
Apr 14 13:31:13.394427 kernel: raid6: avx512x2 gen() 44975 MB/s
Apr 14 13:31:13.411438 kernel: raid6: avx512x1 gen() 45121 MB/s
Apr 14 13:31:13.428590 kernel: raid6: avx2x4 gen() 37242 MB/s
Apr 14 13:31:13.445509 kernel: raid6: avx2x2 gen() 36883 MB/s
Apr 14 13:31:13.463446 kernel: raid6: avx2x1 gen() 28185 MB/s
Apr 14 13:31:13.463530 kernel: raid6: using algorithm avx512x1 gen() 45121 MB/s
Apr 14 13:31:13.481587 kernel: raid6: .... xor() 28299 MB/s, rmw enabled
Apr 14 13:31:13.481877 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 13:31:13.501603 kernel: xor: automatically using best checksumming function avx
Apr 14 13:31:13.639572 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 13:31:13.650621 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:31:13.663621 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:31:13.685084 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 14 13:31:13.689826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:31:13.713086 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 13:31:13.724536 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 14 13:31:13.758575 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:31:13.771853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:31:13.809743 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:31:13.816573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 13:31:13.824066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:31:13.826667 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:31:13.831136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:31:13.837192 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:31:13.846426 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 13:31:13.853176 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 13:31:13.855777 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 13:31:13.862576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:31:13.866403 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 13:31:13.866429 kernel: GPT:9289727 != 19775487
Apr 14 13:31:13.866437 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 13:31:13.866444 kernel: GPT:9289727 != 19775487
Apr 14 13:31:13.866450 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 13:31:13.866457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:31:13.862686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:31:13.872635 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:31:13.872749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:31:13.872864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:31:13.873285 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:31:13.886490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:31:13.891335 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:31:13.894573 kernel: libata version 3.00 loaded.
Apr 14 13:31:13.897695 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 13:31:13.910246 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 13:31:13.910468 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 13:31:13.910508 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 13:31:13.910731 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 13:31:13.912555 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 13:31:13.912707 kernel: AES CTR mode by8 optimization enabled
Apr 14 13:31:13.913414 kernel: scsi host0: ahci
Apr 14 13:31:13.918926 kernel: scsi host1: ahci
Apr 14 13:31:13.920459 kernel: scsi host2: ahci
Apr 14 13:31:13.921479 kernel: scsi host3: ahci
Apr 14 13:31:13.922408 kernel: scsi host4: ahci
Apr 14 13:31:13.922518 kernel: scsi host5: ahci
Apr 14 13:31:13.922595 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 13:31:13.922604 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 13:31:13.922610 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 13:31:13.922619 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 13:31:13.922626 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 13:31:13.922633 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 13:31:13.940434 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (461)
Apr 14 13:31:13.941420 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (483)
Apr 14 13:31:13.952734 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 13:31:14.033584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:31:14.037567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 13:31:14.044190 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 13:31:14.044317 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 13:31:14.052984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 13:31:14.066679 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 13:31:14.073172 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:31:14.078563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:31:14.078614 disk-uuid[556]: Primary Header is updated.
Apr 14 13:31:14.078614 disk-uuid[556]: Secondary Entries is updated.
Apr 14 13:31:14.078614 disk-uuid[556]: Secondary Header is updated.
Apr 14 13:31:14.085415 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:31:14.088533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:31:14.091542 kernel: block device autoloading is deprecated and will be removed.
Apr 14 13:31:14.093688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:31:14.236563 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 13:31:14.236694 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 13:31:14.237407 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 13:31:14.241540 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 13:31:14.241672 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 13:31:14.241697 kernel: ata3.00: applying bridge limits
Apr 14 13:31:14.244416 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 13:31:14.244458 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 13:31:14.245432 kernel: ata3.00: configured for UDMA/100
Apr 14 13:31:14.249535 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 13:31:14.299492 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 13:31:14.299804 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 13:31:14.314478 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 13:31:15.091256 disk-uuid[558]: The operation has completed successfully.
Apr 14 13:31:15.094240 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:31:15.118336 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 13:31:15.118486 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 13:31:15.135016 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 13:31:15.141531 sh[598]: Success
Apr 14 13:31:15.157673 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 13:31:15.196787 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 13:31:15.212177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 13:31:15.214092 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 13:31:15.226049 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 13:31:15.226181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:31:15.226196 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 13:31:15.229433 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 13:31:15.229518 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 13:31:15.237025 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 13:31:15.239795 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 13:31:15.254759 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 13:31:15.257094 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 13:31:15.265777 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:31:15.265807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:31:15.265814 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:31:15.270422 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:31:15.277212 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 13:31:15.280273 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:31:15.287274 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 13:31:15.294533 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 13:31:15.348491 ignition[693]: Ignition 2.19.0
Apr 14 13:31:15.348506 ignition[693]: Stage: fetch-offline
Apr 14 13:31:15.348542 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:31:15.348552 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:31:15.348683 ignition[693]: parsed url from cmdline: ""
Apr 14 13:31:15.348686 ignition[693]: no config URL provided
Apr 14 13:31:15.348689 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 13:31:15.348694 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Apr 14 13:31:15.348713 ignition[693]: op(1): [started] loading QEMU firmware config module
Apr 14 13:31:15.348717 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 13:31:15.363493 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:31:15.362505 ignition[693]: op(1): [finished] loading QEMU firmware config module
Apr 14 13:31:15.371543 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 13:31:15.392535 systemd-networkd[787]: lo: Link UP
Apr 14 13:31:15.392558 systemd-networkd[787]: lo: Gained carrier
Apr 14 13:31:15.393588 systemd-networkd[787]: Enumeration completed
Apr 14 13:31:15.394149 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:31:15.394151 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 13:31:15.394537 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 13:31:15.397657 systemd-networkd[787]: eth0: Link UP
Apr 14 13:31:15.397661 systemd-networkd[787]: eth0: Gained carrier
Apr 14 13:31:15.397668 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:31:15.397692 systemd[1]: Reached target network.target - Network.
Apr 14 13:31:15.420593 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 13:31:15.506631 ignition[693]: parsing config with SHA512: e940345de7791d16d55b4a65688fee539bc98040f8096b0f1bab6c88d15032106d743ec5d859d95d25684a3ec308eade16a1874bb9ed7e01435cb9830387d3fa
Apr 14 13:31:15.510412 unknown[693]: fetched base config from "system"
Apr 14 13:31:15.510760 unknown[693]: fetched user config from "qemu"
Apr 14 13:31:15.511286 ignition[693]: fetch-offline: fetch-offline passed
Apr 14 13:31:15.512711 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.9
Apr 14 13:31:15.511341 ignition[693]: Ignition finished successfully
Apr 14 13:31:15.512718 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Apr 14 13:31:15.518066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:31:15.521341 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 13:31:15.530884 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 13:31:15.545062 ignition[791]: Ignition 2.19.0
Apr 14 13:31:15.545080 ignition[791]: Stage: kargs
Apr 14 13:31:15.545208 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:31:15.545215 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:31:15.545815 ignition[791]: kargs: kargs passed
Apr 14 13:31:15.545846 ignition[791]: Ignition finished successfully
Apr 14 13:31:15.554046 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 13:31:15.563750 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 13:31:15.575568 ignition[800]: Ignition 2.19.0
Apr 14 13:31:15.575599 ignition[800]: Stage: disks
Apr 14 13:31:15.575747 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:31:15.575754 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:31:15.578667 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 13:31:15.576357 ignition[800]: disks: disks passed
Apr 14 13:31:15.581351 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 13:31:15.576428 ignition[800]: Ignition finished successfully
Apr 14 13:31:15.584741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 13:31:15.586840 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:31:15.590080 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 13:31:15.591789 systemd[1]: Reached target basic.target - Basic System.
Apr 14 13:31:15.602576 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 13:31:15.617296 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 13:31:15.622154 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 13:31:15.633481 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 13:31:15.728407 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 13:31:15.729069 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 13:31:15.732296 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 13:31:15.752733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 13:31:15.757880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 13:31:15.767046 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Apr 14 13:31:15.767081 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:31:15.767090 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:31:15.767099 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:31:15.766285 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 13:31:15.773432 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:31:15.766336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 13:31:15.766360 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:31:15.779738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 13:31:15.783176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 13:31:15.798896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 13:31:15.832541 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 13:31:15.837664 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Apr 14 13:31:15.842918 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 13:31:15.847813 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 13:31:15.930025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 13:31:15.942772 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 13:31:15.946269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 13:31:15.952405 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:31:15.971940 ignition[931]: INFO : Ignition 2.19.0
Apr 14 13:31:15.971940 ignition[931]: INFO : Stage: mount
Apr 14 13:31:15.971940 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:31:15.981863 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:31:15.981863 ignition[931]: INFO : mount: mount passed
Apr 14 13:31:15.981863 ignition[931]: INFO : Ignition finished successfully
Apr 14 13:31:15.974215 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 13:31:15.977109 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 13:31:15.989626 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 13:31:16.224605 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 13:31:16.247133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 13:31:16.254429 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946)
Apr 14 13:31:16.258121 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:31:16.258252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:31:16.258268 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:31:16.263407 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:31:16.264063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 13:31:16.288135 ignition[963]: INFO : Ignition 2.19.0
Apr 14 13:31:16.288135 ignition[963]: INFO : Stage: files
Apr 14 13:31:16.288135 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:31:16.288135 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:31:16.296253 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 13:31:16.296253 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 13:31:16.296253 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 13:31:16.296253 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 13:31:16.296253 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 13:31:16.296253 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 13:31:16.296253 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:31:16.296253 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 13:31:16.292599 unknown[963]: wrote ssh authorized keys file for user: core
Apr 14 13:31:16.320938 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 13:31:16.388142 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:31:16.388142 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 13:31:16.388142 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:31:16.402104 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 14 13:31:16.482787 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 14 13:31:16.716230 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 14 13:31:17.373291 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:31:17.373291 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 14 13:31:17.379881 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:31:17.411637 ignition[963]: INFO : files: files passed
Apr 14 13:31:17.411637 ignition[963]: INFO : Ignition finished successfully
Apr 14 13:31:17.399911 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 13:31:17.415844 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 13:31:17.426133 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 13:31:17.429357 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 13:31:17.429484 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 13:31:17.459022 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 13:31:17.462812 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:31:17.462812 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:31:17.468892 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:31:17.474087 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:31:17.474331 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 13:31:17.491583 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 13:31:17.517738 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 13:31:17.517859 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 13:31:17.521558 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 13:31:17.525669 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 13:31:17.530776 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 13:31:17.531630 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 13:31:17.549464 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:31:17.559796 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 13:31:17.571657 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:31:17.571818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:31:17.577878 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 13:31:17.581302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 13:31:17.581438 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:31:17.586519 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 13:31:17.590240 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 13:31:17.593589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 13:31:17.597026 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:31:17.600851 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 13:31:17.604717 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 13:31:17.608399 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:31:17.612502 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 13:31:17.616363 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 13:31:17.619847 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 13:31:17.622937 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 13:31:17.623087 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:31:17.627838 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:31:17.630051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:31:17.635909 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 13:31:17.636077 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:31:17.640698 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 13:31:17.640810 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:31:17.647552 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 13:31:17.647714 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:31:17.652506 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 13:31:17.657961 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 13:31:17.659059 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:31:17.659777 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 13:31:17.665027 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 13:31:17.667431 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 13:31:17.667513 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:31:17.672028 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 13:31:17.672121 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:31:17.673496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 13:31:17.673596 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:31:17.676413 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 13:31:17.676495 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 13:31:17.699761 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 13:31:17.704828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 13:31:17.706962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 13:31:17.707117 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:31:17.716831 ignition[1017]: INFO : Ignition 2.19.0
Apr 14 13:31:17.716831 ignition[1017]: INFO : Stage: umount
Apr 14 13:31:17.716831 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:31:17.716831 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:31:17.716831 ignition[1017]: INFO : umount: umount passed
Apr 14 13:31:17.716831 ignition[1017]: INFO : Ignition finished successfully
Apr 14 13:31:17.714761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 13:31:17.714873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:31:17.723640 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 13:31:17.723729 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 13:31:17.727065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 13:31:17.727146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 13:31:17.728954 systemd[1]: Stopped target network.target - Network.
Apr 14 13:31:17.733262 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 13:31:17.733323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 13:31:17.738572 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 13:31:17.738613 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 13:31:17.742768 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 13:31:17.742804 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 13:31:17.750146 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 13:31:17.750188 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 13:31:17.755696 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 13:31:17.760868 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 13:31:17.769336 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 13:31:17.774251 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 13:31:17.774361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 13:31:17.785202 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 13:31:17.785280 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:31:17.788447 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 14 13:31:17.791291 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 13:31:17.791428 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 13:31:17.793978 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 13:31:17.794038 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:31:17.809541 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 13:31:17.811822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 13:31:17.811874 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:31:17.815937 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 13:31:17.815976 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:31:17.819645 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 13:31:17.819678 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:31:17.823259 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:31:17.830185 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 13:31:17.830296 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 13:31:17.845642 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 13:31:17.845787 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 13:31:17.849362 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 13:31:17.849449 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 13:31:17.865969 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 13:31:17.866225 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:31:17.867915 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 13:31:17.867944 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:31:17.872137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 13:31:17.872166 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:31:17.876863 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 13:31:17.876900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:31:17.882073 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 13:31:17.882115 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:31:17.886759 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:31:17.886795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:31:17.905801 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 13:31:17.905925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 13:31:17.906025 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:31:17.909978 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 14 13:31:17.910051 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:31:17.918234 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 13:31:17.918296 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:31:17.920268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:31:17.920310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:31:17.927433 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 13:31:17.927540 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 13:31:17.931922 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 13:31:17.937328 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 13:31:17.952642 systemd[1]: Switching root.
Apr 14 13:31:17.983796 systemd-journald[194]: Journal stopped
Apr 14 13:31:18.889859 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 14 13:31:18.889916 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 13:31:18.889927 kernel: SELinux: policy capability open_perms=1
Apr 14 13:31:18.889937 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 13:31:18.889949 kernel: SELinux: policy capability always_check_network=0
Apr 14 13:31:18.889956 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 13:31:18.889964 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 13:31:18.889971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 13:31:18.889978 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 13:31:18.890007 kernel: audit: type=1403 audit(1776173478.097:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 13:31:18.890016 systemd[1]: Successfully loaded SELinux policy in 35.411ms.
Apr 14 13:31:18.890030 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.411ms.
Apr 14 13:31:18.890041 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:31:18.890051 systemd[1]: Detected virtualization kvm.
Apr 14 13:31:18.890058 systemd[1]: Detected architecture x86-64.
Apr 14 13:31:18.890067 systemd[1]: Detected first boot.
Apr 14 13:31:18.890078 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:31:18.890091 zram_generator::config[1062]: No configuration found.
Apr 14 13:31:18.890105 systemd[1]: Populated /etc with preset unit settings.
Apr 14 13:31:18.890118 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 14 13:31:18.890128 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 14 13:31:18.890136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 14 13:31:18.890144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 13:31:18.890153 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 13:31:18.890161 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 13:31:18.890169 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 13:31:18.890181 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 13:31:18.890188 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 13:31:18.890198 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 13:31:18.890206 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 13:31:18.890213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:31:18.890221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:31:18.890229 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 13:31:18.890237 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 13:31:18.890246 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 13:31:18.890253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:31:18.890261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 13:31:18.890271 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:31:18.890279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 14 13:31:18.890286 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 14 13:31:18.890294 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 14 13:31:18.890302 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 13:31:18.890310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:31:18.890318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:31:18.890329 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:31:18.890339 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:31:18.890346 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 13:31:18.890354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 13:31:18.890362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:31:18.890428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:31:18.890437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:31:18.890445 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 13:31:18.890453 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 13:31:18.890461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 13:31:18.890471 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 13:31:18.890484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:31:18.890492 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 13:31:18.890499 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 13:31:18.890507 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 13:31:18.890516 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 13:31:18.890524 systemd[1]: Reached target machines.target - Containers.
Apr 14 13:31:18.890534 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 13:31:18.890542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:31:18.890551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:31:18.890558 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 13:31:18.890567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:31:18.890574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:31:18.890582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:31:18.890590 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 13:31:18.890598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:31:18.890607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 13:31:18.890616 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 14 13:31:18.890624 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 14 13:31:18.890631 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 14 13:31:18.890639 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 14 13:31:18.890647 kernel: loop: module loaded
Apr 14 13:31:18.890657 kernel: fuse: init (API version 7.39)
Apr 14 13:31:18.890664 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:31:18.890672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:31:18.890679 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 13:31:18.890689 kernel: ACPI: bus type drm_connector registered
Apr 14 13:31:18.890696 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 13:31:18.890723 systemd-journald[1139]: Collecting audit messages is disabled.
Apr 14 13:31:18.890740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:31:18.890748 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 14 13:31:18.890757 systemd-journald[1139]: Journal started
Apr 14 13:31:18.890775 systemd-journald[1139]: Runtime Journal (/run/log/journal/0779edc061534903bab6c2194be5b28a) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:31:18.519639 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 13:31:18.543739 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 13:31:18.545316 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 14 13:31:18.892342 systemd[1]: Stopped verity-setup.service.
Apr 14 13:31:18.897416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:31:18.900416 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:31:18.901750 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 13:31:18.903561 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 13:31:18.905471 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 13:31:18.907150 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 13:31:18.908963 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 13:31:18.910795 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 13:31:18.912580 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 13:31:18.914709 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:31:18.916874 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 13:31:18.917039 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 13:31:18.919149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:31:18.919280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:31:18.921297 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:31:18.921529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:31:18.923469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:31:18.923589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:31:18.926152 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 13:31:18.926320 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 13:31:18.928364 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:31:18.928601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:31:18.930684 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:31:18.932863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 13:31:18.935230 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 13:31:18.943086 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:31:18.952134 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 13:31:18.967733 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 13:31:18.971401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 13:31:18.974455 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 13:31:18.974517 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:31:18.978478 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 13:31:18.982522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 13:31:18.986528 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 13:31:18.988742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:31:18.990063 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 13:31:18.993418 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 13:31:18.996399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:31:18.999830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 13:31:19.001674 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:31:19.007610 systemd-journald[1139]: Time spent on flushing to /var/log/journal/0779edc061534903bab6c2194be5b28a is 15.726ms for 954 entries.
Apr 14 13:31:19.007610 systemd-journald[1139]: System Journal (/var/log/journal/0779edc061534903bab6c2194be5b28a) is 8.0M, max 195.6M, 187.6M free.
Apr 14 13:31:19.038899 systemd-journald[1139]: Received client request to flush runtime journal.
Apr 14 13:31:19.038926 kernel: loop0: detected capacity change from 0 to 140768
Apr 14 13:31:19.006556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:31:19.011253 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 13:31:19.015412 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 13:31:19.018286 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 13:31:19.021434 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 13:31:19.023800 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 13:31:19.026795 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 13:31:19.030412 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 13:31:19.038684 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 13:31:19.052578 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 13:31:19.055057 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 13:31:19.058150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:31:19.066067 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 14 13:31:19.069719 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 13:31:19.077240 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 13:31:19.077844 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 13:31:19.083455 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Apr 14 13:31:19.083481 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Apr 14 13:31:19.088912 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:31:19.096432 kernel: loop1: detected capacity change from 0 to 142488
Apr 14 13:31:19.100650 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 13:31:19.121036 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 13:31:19.129598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:31:19.133410 kernel: loop2: detected capacity change from 0 to 219192
Apr 14 13:31:19.147980 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 14 13:31:19.148029 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 14 13:31:19.152144 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:31:19.173487 kernel: loop3: detected capacity change from 0 to 140768
Apr 14 13:31:19.190754 kernel: loop4: detected capacity change from 0 to 142488
Apr 14 13:31:19.203413 kernel: loop5: detected capacity change from 0 to 219192
Apr 14 13:31:19.214785 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 13:31:19.215133 (sd-merge)[1203]: Merged extensions into '/usr'.
Apr 14 13:31:19.218733 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 13:31:19.218758 systemd[1]: Reloading...
Apr 14 13:31:19.279430 zram_generator::config[1225]: No configuration found.
Apr 14 13:31:19.348557 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 13:31:19.410527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:31:19.470557 systemd[1]: Reloading finished in 251 ms.
Apr 14 13:31:19.521840 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 13:31:19.525097 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 13:31:19.528052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 13:31:19.548512 systemd[1]: Starting ensure-sysext.service...
Apr 14 13:31:19.551717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:31:19.555795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:31:19.560167 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Apr 14 13:31:19.560288 systemd[1]: Reloading...
Apr 14 13:31:19.569228 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 13:31:19.569840 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 13:31:19.570566 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 13:31:19.570847 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Apr 14 13:31:19.570921 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Apr 14 13:31:19.573670 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:31:19.573676 systemd-tmpfiles[1269]: Skipping /boot
Apr 14 13:31:19.579218 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:31:19.579249 systemd-tmpfiles[1269]: Skipping /boot
Apr 14 13:31:19.580644 systemd-udevd[1270]: Using default interface naming scheme 'v255'.
Apr 14 13:31:19.603434 zram_generator::config[1296]: No configuration found.
Apr 14 13:31:19.635414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1327)
Apr 14 13:31:19.682417 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 14 13:31:19.689418 kernel: ACPI: button: Power Button [PWRF]
Apr 14 13:31:19.701034 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 13:31:19.701231 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 13:31:19.701322 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 13:31:19.711421 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 14 13:31:19.710848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:31:19.743435 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 13:31:19.760891 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 14 13:31:19.761260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 13:31:19.763819 systemd[1]: Reloading finished in 203 ms.
Apr 14 13:31:19.824836 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:31:19.837129 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:31:19.933654 systemd[1]: Finished ensure-sysext.service.
Apr 14 13:31:19.957773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:31:19.971597 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 13:31:19.976221 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 13:31:19.978502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:31:19.979879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:31:19.984634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:31:19.991636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:31:19.996095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:31:19.998838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:31:19.999734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 13:31:20.002928 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 13:31:20.007529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 13:31:20.011781 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:31:20.016913 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 13:31:20.018891 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 13:31:20.020878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:31:20.029862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:31:20.031131 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 13:31:20.033610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:31:20.033753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:31:20.035328 augenrules[1394]: No rules
Apr 14 13:31:20.036511 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:31:20.036642 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:31:20.047965 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 13:31:20.048459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:31:20.048603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:31:20.049086 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:31:20.049768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:31:20.050790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 13:31:20.051261 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 13:31:20.060116 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 13:31:20.074913 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 13:31:20.075066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:31:20.075197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:31:20.078673 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 13:31:20.085682 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 13:31:20.086590 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 13:31:20.157010 systemd-networkd[1385]: lo: Link UP
Apr 14 13:31:20.157032 systemd-networkd[1385]: lo: Gained carrier
Apr 14 13:31:20.157936 systemd-networkd[1385]: Enumeration completed
Apr 14 13:31:20.158519 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:31:20.158535 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 13:31:20.159130 systemd-networkd[1385]: eth0: Link UP
Apr 14 13:31:20.159142 systemd-networkd[1385]: eth0: Gained carrier
Apr 14 13:31:20.159152 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:31:20.160061 systemd-resolved[1386]: Positive Trust Anchors:
Apr 14 13:31:20.160085 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 13:31:20.160109 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 13:31:20.163512 systemd-resolved[1386]: Defaulting to hostname 'linux'.
Apr 14 13:31:20.187910 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 13:31:20.188789 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 13:31:20.194689 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 13:31:20.197338 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 13:31:20.199460 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 13:31:20.199837 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 13:31:20.200038 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Apr 14 13:31:20.765699 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 13:31:20.765727 systemd-timesyncd[1387]: Initial clock synchronization to Tue 2026-04-14 13:31:20.765569 UTC.
Apr 14 13:31:20.765737 systemd-resolved[1386]: Clock change detected. Flushing caches.
Apr 14 13:31:20.767235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:31:20.769595 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 13:31:20.772070 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 13:31:20.775485 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 13:31:20.782756 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:31:20.784936 systemd[1]: Reached target network.target - Network.
Apr 14 13:31:20.786712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:31:20.789190 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 13:31:20.791133 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 13:31:20.793190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 13:31:20.795562 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 13:31:20.797702 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 13:31:20.797742 systemd[1]: Reached target paths.target - Path Units.
Apr 14 13:31:20.799216 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 13:31:20.801311 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 13:31:20.803151 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 13:31:20.805198 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 13:31:20.807268 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 13:31:20.810315 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 13:31:20.826777 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 13:31:20.830056 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 13:31:20.832986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 13:31:20.835207 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 13:31:20.837226 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 13:31:20.838821 systemd[1]: Reached target basic.target - Basic System.
Apr 14 13:31:20.840371 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 13:31:20.840388 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 13:31:20.841518 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 13:31:20.844000 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 13:31:20.845131 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 13:31:20.850104 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 13:31:20.852680 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 13:31:20.854639 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 13:31:20.855460 jq[1432]: false
Apr 14 13:31:20.857047 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 13:31:20.859650 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 13:31:20.866041 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found loop3
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found loop4
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found loop5
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found sr0
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda1
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda2
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda3
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found usr
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda4
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda6
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda7
Apr 14 13:31:20.868441 extend-filesystems[1433]: Found vda9
Apr 14 13:31:20.868441 extend-filesystems[1433]: Checking size of /dev/vda9
Apr 14 13:31:20.913757 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 13:31:20.913813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1332)
Apr 14 13:31:20.869114 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 13:31:20.913994 extend-filesystems[1433]: Resized partition /dev/vda9
Apr 14 13:31:20.876087 dbus-daemon[1431]: [system] SELinux support is enabled
Apr 14 13:31:20.875118 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 13:31:20.919125 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Apr 14 13:31:20.878406 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 13:31:20.878763 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 13:31:20.922971 jq[1450]: true
Apr 14 13:31:20.880033 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 13:31:20.885005 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 13:31:20.888691 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 13:31:20.894256 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 13:31:20.902077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 13:31:20.902234 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 13:31:20.902475 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 13:31:20.902634 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 13:31:20.909377 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 13:31:20.909550 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 13:31:20.923289 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 13:31:20.930773 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 13:31:20.933849 jq[1459]: true
Apr 14 13:31:20.930792 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 13:31:20.934376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 14 13:31:20.934449 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 14 13:31:20.939654 update_engine[1449]: I20260414 13:31:20.939132 1449 main.cc:92] Flatcar Update Engine starting
Apr 14 13:31:20.942553 tar[1457]: linux-amd64/LICENSE
Apr 14 13:31:20.942553 tar[1457]: linux-amd64/helm
Apr 14 13:31:20.947384 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 13:31:20.948895 update_engine[1449]: I20260414 13:31:20.948695 1449 update_check_scheduler.cc:74] Next update check in 8m25s
Apr 14 13:31:20.948962 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 13:31:20.953685 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 14 13:31:20.967110 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 13:31:20.967134 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 13:31:20.970738 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 13:31:20.970738 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 13:31:20.970738 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 13:31:20.969513 systemd-logind[1445]: New seat seat0.
Apr 14 13:31:20.984633 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Apr 14 13:31:20.970455 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 13:31:20.970843 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 13:31:20.973820 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 13:31:21.000781 bash[1486]: Updated "/home/core/.ssh/authorized_keys"
Apr 14 13:31:21.002501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 14 13:31:21.005726 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 14 13:31:21.016300 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 14 13:31:21.110632 containerd[1458]: time="2026-04-14T13:31:21.110090758Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 14 13:31:21.131779 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 14 13:31:21.132824 containerd[1458]: time="2026-04-14T13:31:21.132618442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134146 containerd[1458]: time="2026-04-14T13:31:21.134101927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134146 containerd[1458]: time="2026-04-14T13:31:21.134139458Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 13:31:21.134194 containerd[1458]: time="2026-04-14T13:31:21.134151780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 13:31:21.134322 containerd[1458]: time="2026-04-14T13:31:21.134286495Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 13:31:21.134322 containerd[1458]: time="2026-04-14T13:31:21.134317948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134383 containerd[1458]: time="2026-04-14T13:31:21.134361659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134402 containerd[1458]: time="2026-04-14T13:31:21.134383967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134563 containerd[1458]: time="2026-04-14T13:31:21.134517620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134587 containerd[1458]: time="2026-04-14T13:31:21.134569592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134603 containerd[1458]: time="2026-04-14T13:31:21.134584980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134603 containerd[1458]: time="2026-04-14T13:31:21.134592730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134867 containerd[1458]: time="2026-04-14T13:31:21.134646480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134867 containerd[1458]: time="2026-04-14T13:31:21.134794721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134867 containerd[1458]: time="2026-04-14T13:31:21.134865071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:31:21.134960 containerd[1458]: time="2026-04-14T13:31:21.134874712Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 13:31:21.134987 containerd[1458]: time="2026-04-14T13:31:21.134971373Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 13:31:21.135192 containerd[1458]: time="2026-04-14T13:31:21.135079653Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143501331Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143580872Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143597661Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143609882Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143623392Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143742438Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.143964161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144039658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144049537Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144060438Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144072920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144102357Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144113190Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.144957 containerd[1458]: time="2026-04-14T13:31:21.144123776Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144134404Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144145200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144154051Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144164328Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144181101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144192483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144202152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144211981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144222243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144233224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144242712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144256977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144266898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145210 containerd[1458]: time="2026-04-14T13:31:21.144278101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144287033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144295737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144305763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144316262Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144331595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144341514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144349969Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144381444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144396399Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144404364Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144413156Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144419928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144429077Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 13:31:21.145386 containerd[1458]: time="2026-04-14T13:31:21.144440899Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 13:31:21.145583 containerd[1458]: time="2026-04-14T13:31:21.144452173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 14 13:31:21.145601 containerd[1458]: time="2026-04-14T13:31:21.144684326Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 14 13:31:21.145601 containerd[1458]: time="2026-04-14T13:31:21.144724884Z" level=info msg="Connect containerd service"
Apr 14 13:31:21.145601 containerd[1458]: time="2026-04-14T13:31:21.144754781Z" level=info msg="using legacy CRI server"
Apr 14 13:31:21.145601 containerd[1458]: time="2026-04-14T13:31:21.144759481Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 14 13:31:21.145601 containerd[1458]: time="2026-04-14T13:31:21.144844783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 14 13:31:21.145601 containerd[1458]: time="2026-04-14T13:31:21.145481240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 13:31:21.145772 containerd[1458]: time="2026-04-14T13:31:21.145670350Z" level=info msg="Start subscribing containerd event"
Apr 14 13:31:21.145772 containerd[1458]: time="2026-04-14T13:31:21.145709856Z" level=info msg="Start recovering state"
Apr 14 13:31:21.145772 containerd[1458]: time="2026-04-14T13:31:21.145752812Z" level=info msg="Start event monitor"
Apr 14 13:31:21.145772 containerd[1458]: time="2026-04-14T13:31:21.145759563Z" level=info msg="Start snapshots syncer"
Apr 14 13:31:21.145772 containerd[1458]: time="2026-04-14T13:31:21.145765728Z" level=info msg="Start cni network conf syncer for default"
Apr 14 13:31:21.145772 containerd[1458]: time="2026-04-14T13:31:21.145772167Z" level=info msg="Start streaming server"
Apr 14 13:31:21.146337 containerd[1458]: time="2026-04-14T13:31:21.146289107Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 14 13:31:21.146377 containerd[1458]: time="2026-04-14T13:31:21.146352028Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 14 13:31:21.146466 systemd[1]: Started containerd.service - containerd container runtime.
Apr 14 13:31:21.146690 containerd[1458]: time="2026-04-14T13:31:21.146677005Z" level=info msg="containerd successfully booted in 0.037331s"
Apr 14 13:31:21.155039 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 14 13:31:21.166340 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 14 13:31:21.173654 systemd[1]: issuegen.service: Deactivated successfully.
Apr 14 13:31:21.173819 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 14 13:31:21.177477 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 14 13:31:21.189950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 14 13:31:21.193990 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 14 13:31:21.196763 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 14 13:31:21.198705 systemd[1]: Reached target getty.target - Login Prompts.
Apr 14 13:31:21.390611 tar[1457]: linux-amd64/README.md Apr 14 13:31:21.406038 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 13:31:22.103869 systemd-networkd[1385]: eth0: Gained IPv6LL Apr 14 13:31:22.107005 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 13:31:22.109790 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 13:31:22.126338 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 13:31:22.130151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:22.133480 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 13:31:22.150081 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 13:31:22.150369 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 13:31:22.154207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 13:31:22.157859 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 13:31:22.834025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:22.836621 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 13:31:22.838579 systemd[1]: Startup finished in 988ms (kernel) + 5.390s (initrd) + 4.210s (userspace) = 10.589s. 
Apr 14 13:31:22.838997 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:31:23.204668 kubelet[1543]: E0414 13:31:23.204318 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:31:23.207602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:31:23.207728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:31:26.552748 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 13:31:26.553780 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:53736.service - OpenSSH per-connection server daemon (10.0.0.1:53736). Apr 14 13:31:26.623060 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:26.626827 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:26.634970 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 13:31:26.653319 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 13:31:26.655024 systemd-logind[1445]: New session 1 of user core. Apr 14 13:31:26.664026 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 13:31:26.680367 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 13:31:26.683346 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 13:31:26.768381 systemd[1560]: Queued start job for default target default.target. 
Apr 14 13:31:26.783115 systemd[1560]: Created slice app.slice - User Application Slice. Apr 14 13:31:26.783155 systemd[1560]: Reached target paths.target - Paths. Apr 14 13:31:26.783167 systemd[1560]: Reached target timers.target - Timers. Apr 14 13:31:26.784351 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 13:31:26.797830 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 13:31:26.797977 systemd[1560]: Reached target sockets.target - Sockets. Apr 14 13:31:26.797989 systemd[1560]: Reached target basic.target - Basic System. Apr 14 13:31:26.798019 systemd[1560]: Reached target default.target - Main User Target. Apr 14 13:31:26.798039 systemd[1560]: Startup finished in 105ms. Apr 14 13:31:26.798351 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 13:31:26.808220 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 13:31:26.871098 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:53738.service - OpenSSH per-connection server daemon (10.0.0.1:53738). Apr 14 13:31:26.932939 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 53738 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:26.934493 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:26.938205 systemd-logind[1445]: New session 2 of user core. Apr 14 13:31:26.948451 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 13:31:27.001458 sshd[1571]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.014254 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:53738.service: Deactivated successfully. Apr 14 13:31:27.015409 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 13:31:27.016386 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Apr 14 13:31:27.017327 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:53748.service - OpenSSH per-connection server daemon (10.0.0.1:53748). 
Apr 14 13:31:27.018022 systemd-logind[1445]: Removed session 2. Apr 14 13:31:27.052428 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 53748 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.053686 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.057765 systemd-logind[1445]: New session 3 of user core. Apr 14 13:31:27.066276 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 13:31:27.117080 sshd[1578]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.127211 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:53748.service: Deactivated successfully. Apr 14 13:31:27.128351 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 13:31:27.130688 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Apr 14 13:31:27.131804 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:53760.service - OpenSSH per-connection server daemon (10.0.0.1:53760). Apr 14 13:31:27.132415 systemd-logind[1445]: Removed session 3. Apr 14 13:31:27.169364 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 53760 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.170823 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.175122 systemd-logind[1445]: New session 4 of user core. Apr 14 13:31:27.185392 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 13:31:27.243099 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.260379 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:53760.service: Deactivated successfully. Apr 14 13:31:27.262038 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 13:31:27.263835 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Apr 14 13:31:27.265663 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:53766.service - OpenSSH per-connection server daemon (10.0.0.1:53766). 
Apr 14 13:31:27.266453 systemd-logind[1445]: Removed session 4. Apr 14 13:31:27.304464 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 53766 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.305779 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.311018 systemd-logind[1445]: New session 5 of user core. Apr 14 13:31:27.323192 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 13:31:27.394535 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 13:31:27.395085 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:27.424470 sudo[1595]: pam_unix(sudo:session): session closed for user root Apr 14 13:31:27.427115 sshd[1592]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.444274 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:53766.service: Deactivated successfully. Apr 14 13:31:27.445534 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 13:31:27.447412 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Apr 14 13:31:27.457121 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:53776.service - OpenSSH per-connection server daemon (10.0.0.1:53776). Apr 14 13:31:27.458162 systemd-logind[1445]: Removed session 5. Apr 14 13:31:27.492118 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 53776 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.493873 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.497803 systemd-logind[1445]: New session 6 of user core. Apr 14 13:31:27.507582 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 14 13:31:27.561371 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 13:31:27.561741 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:27.566028 sudo[1604]: pam_unix(sudo:session): session closed for user root Apr 14 13:31:27.570156 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 13:31:27.570358 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:27.588360 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 13:31:27.590289 auditctl[1607]: No rules Apr 14 13:31:27.591104 systemd[1]: audit-rules.service: Deactivated successfully. Apr 14 13:31:27.591275 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 13:31:27.592722 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 13:31:27.654282 augenrules[1625]: No rules Apr 14 13:31:27.656063 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 13:31:27.657399 sudo[1603]: pam_unix(sudo:session): session closed for user root Apr 14 13:31:27.659206 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.670364 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:53776.service: Deactivated successfully. Apr 14 13:31:27.671642 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 13:31:27.672745 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Apr 14 13:31:27.673718 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:53790.service - OpenSSH per-connection server daemon (10.0.0.1:53790). Apr 14 13:31:27.674361 systemd-logind[1445]: Removed session 6. 
Apr 14 13:31:27.712671 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 53790 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.713856 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.717486 systemd-logind[1445]: New session 7 of user core. Apr 14 13:31:27.725096 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 13:31:27.777718 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 13:31:27.777999 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:28.033191 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 13:31:28.033226 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 13:31:28.288882 dockerd[1654]: time="2026-04-14T13:31:28.288714416Z" level=info msg="Starting up" Apr 14 13:31:28.503231 dockerd[1654]: time="2026-04-14T13:31:28.503131149Z" level=info msg="Loading containers: start." Apr 14 13:31:28.617952 kernel: Initializing XFRM netlink socket Apr 14 13:31:28.706306 systemd-networkd[1385]: docker0: Link UP Apr 14 13:31:28.731858 dockerd[1654]: time="2026-04-14T13:31:28.731799247Z" level=info msg="Loading containers: done." Apr 14 13:31:28.754244 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2566562076-merged.mount: Deactivated successfully. 
Apr 14 13:31:28.754695 dockerd[1654]: time="2026-04-14T13:31:28.754624174Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 13:31:28.754817 dockerd[1654]: time="2026-04-14T13:31:28.754773115Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 13:31:28.755120 dockerd[1654]: time="2026-04-14T13:31:28.754889469Z" level=info msg="Daemon has completed initialization" Apr 14 13:31:28.800897 dockerd[1654]: time="2026-04-14T13:31:28.800763728Z" level=info msg="API listen on /run/docker.sock" Apr 14 13:31:28.801094 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 13:31:29.228557 containerd[1458]: time="2026-04-14T13:31:29.228463739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 14 13:31:29.987563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452967144.mount: Deactivated successfully. 
Apr 14 13:31:30.723179 containerd[1458]: time="2026-04-14T13:31:30.723058046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:30.724063 containerd[1458]: time="2026-04-14T13:31:30.723984715Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947180" Apr 14 13:31:30.725215 containerd[1458]: time="2026-04-14T13:31:30.725175651Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:30.727603 containerd[1458]: time="2026-04-14T13:31:30.727541350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:30.728403 containerd[1458]: time="2026-04-14T13:31:30.728368351Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 1.499810923s" Apr 14 13:31:30.728461 containerd[1458]: time="2026-04-14T13:31:30.728406263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\"" Apr 14 13:31:30.729105 containerd[1458]: time="2026-04-14T13:31:30.729072225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 14 13:31:31.598508 containerd[1458]: time="2026-04-14T13:31:31.598375865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:31.599272 containerd[1458]: time="2026-04-14T13:31:31.599145831Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165744" Apr 14 13:31:31.600562 containerd[1458]: time="2026-04-14T13:31:31.600454649Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:31.603996 containerd[1458]: time="2026-04-14T13:31:31.603882749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:31.605222 containerd[1458]: time="2026-04-14T13:31:31.605176885Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 876.058689ms" Apr 14 13:31:31.605259 containerd[1458]: time="2026-04-14T13:31:31.605222389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\"" Apr 14 13:31:31.605879 containerd[1458]: time="2026-04-14T13:31:31.605833316Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 14 13:31:32.297597 containerd[1458]: time="2026-04-14T13:31:32.297514892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:32.298323 containerd[1458]: time="2026-04-14T13:31:32.298270157Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729779" Apr 14 13:31:32.299854 containerd[1458]: time="2026-04-14T13:31:32.299744208Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:32.302731 containerd[1458]: time="2026-04-14T13:31:32.302611929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:32.303577 containerd[1458]: time="2026-04-14T13:31:32.303546660Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 697.670236ms" Apr 14 13:31:32.303628 containerd[1458]: time="2026-04-14T13:31:32.303601043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\"" Apr 14 13:31:32.305433 containerd[1458]: time="2026-04-14T13:31:32.305214161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 14 13:31:32.960859 kernel: hrtimer: interrupt took 9987376 ns Apr 14 13:31:33.284519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647498318.mount: Deactivated successfully. Apr 14 13:31:33.285558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 13:31:33.296288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:33.405134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 13:31:33.412477 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:31:33.477055 kubelet[1885]: E0414 13:31:33.476948 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:31:33.482292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:31:33.482458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:31:33.573416 containerd[1458]: time="2026-04-14T13:31:33.573126836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:33.573978 containerd[1458]: time="2026-04-14T13:31:33.573942308Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861668" Apr 14 13:31:33.575343 containerd[1458]: time="2026-04-14T13:31:33.575311100Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:33.577252 containerd[1458]: time="2026-04-14T13:31:33.577132310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:33.577677 containerd[1458]: time="2026-04-14T13:31:33.577615214Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.272295035s" Apr 14 13:31:33.577677 containerd[1458]: time="2026-04-14T13:31:33.577658806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\"" Apr 14 13:31:33.578194 containerd[1458]: time="2026-04-14T13:31:33.578171473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 14 13:31:34.024553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284084333.mount: Deactivated successfully. Apr 14 13:31:34.808294 containerd[1458]: time="2026-04-14T13:31:34.808163535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.809027 containerd[1458]: time="2026-04-14T13:31:34.808984550Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 14 13:31:34.810260 containerd[1458]: time="2026-04-14T13:31:34.810220218Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.813071 containerd[1458]: time="2026-04-14T13:31:34.813020663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.813965 containerd[1458]: time="2026-04-14T13:31:34.813901763Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.235700073s" Apr 14 13:31:34.813965 containerd[1458]: time="2026-04-14T13:31:34.813965507Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 14 13:31:34.814503 containerd[1458]: time="2026-04-14T13:31:34.814455930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 14 13:31:35.200776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294708417.mount: Deactivated successfully. Apr 14 13:31:35.206247 containerd[1458]: time="2026-04-14T13:31:35.206192695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:35.206748 containerd[1458]: time="2026-04-14T13:31:35.206689677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 14 13:31:35.207937 containerd[1458]: time="2026-04-14T13:31:35.207883812Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:35.210686 containerd[1458]: time="2026-04-14T13:31:35.210557689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:35.211331 containerd[1458]: time="2026-04-14T13:31:35.211296536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
396.79225ms" Apr 14 13:31:35.211331 containerd[1458]: time="2026-04-14T13:31:35.211325756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 14 13:31:35.211996 containerd[1458]: time="2026-04-14T13:31:35.211972409Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 14 13:31:35.687589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683157041.mount: Deactivated successfully. Apr 14 13:31:36.429955 containerd[1458]: time="2026-04-14T13:31:36.429776060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:36.430445 containerd[1458]: time="2026-04-14T13:31:36.430358522Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22873707" Apr 14 13:31:36.431242 containerd[1458]: time="2026-04-14T13:31:36.431202085Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:36.434118 containerd[1458]: time="2026-04-14T13:31:36.434077681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:36.434964 containerd[1458]: time="2026-04-14T13:31:36.434883946Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.222883002s" Apr 14 13:31:36.434964 containerd[1458]: time="2026-04-14T13:31:36.434950599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" 
returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 14 13:31:39.704259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:39.718602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:39.748613 systemd[1]: Reloading requested from client PID 2044 ('systemctl') (unit session-7.scope)... Apr 14 13:31:39.748672 systemd[1]: Reloading... Apr 14 13:31:39.822035 zram_generator::config[2083]: No configuration found. Apr 14 13:31:39.986727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:31:40.049503 systemd[1]: Reloading finished in 299 ms. Apr 14 13:31:40.107857 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 14 13:31:40.107958 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 14 13:31:40.108161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:40.110736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:40.246504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:40.254026 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 13:31:40.347082 kubelet[2132]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 13:31:40.347082 kubelet[2132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 13:31:40.347546 kubelet[2132]: I0414 13:31:40.347275 2132 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 13:31:40.935500 kubelet[2132]: I0414 13:31:40.935391 2132 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 14 13:31:40.935500 kubelet[2132]: I0414 13:31:40.935450 2132 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 13:31:40.937438 kubelet[2132]: I0414 13:31:40.937368 2132 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 14 13:31:40.937438 kubelet[2132]: I0414 13:31:40.937414 2132 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 13:31:40.938098 kubelet[2132]: I0414 13:31:40.938022 2132 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 13:31:41.055503 kubelet[2132]: E0414 13:31:41.055407 2132 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:31:41.062128 kubelet[2132]: I0414 13:31:41.062024 2132 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:31:41.072807 kubelet[2132]: E0414 13:31:41.072589 2132 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 13:31:41.072807 kubelet[2132]: I0414 13:31:41.072800 2132 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 14 13:31:41.087802 kubelet[2132]: I0414 13:31:41.087494 2132 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 14 13:31:41.093303 kubelet[2132]: I0414 13:31:41.093122 2132 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 13:31:41.094056 kubelet[2132]: I0414 13:31:41.093307 2132 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 
13:31:41.094198 kubelet[2132]: I0414 13:31:41.094061 2132 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 13:31:41.094198 kubelet[2132]: I0414 13:31:41.094074 2132 container_manager_linux.go:306] "Creating device plugin manager" Apr 14 13:31:41.094254 kubelet[2132]: I0414 13:31:41.094235 2132 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 14 13:31:41.098037 kubelet[2132]: I0414 13:31:41.097967 2132 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:31:41.098161 kubelet[2132]: I0414 13:31:41.098146 2132 kubelet.go:475] "Attempting to sync node with API server" Apr 14 13:31:41.098161 kubelet[2132]: I0414 13:31:41.098157 2132 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 13:31:41.098272 kubelet[2132]: I0414 13:31:41.098174 2132 kubelet.go:387] "Adding apiserver pod source" Apr 14 13:31:41.098272 kubelet[2132]: I0414 13:31:41.098187 2132 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 13:31:41.099660 kubelet[2132]: E0414 13:31:41.099303 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:31:41.099944 kubelet[2132]: E0414 13:31:41.099853 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:31:41.101058 kubelet[2132]: I0414 13:31:41.100982 2132 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Apr 14 13:31:41.103598 kubelet[2132]: I0414 13:31:41.103437 2132 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 13:31:41.103598 kubelet[2132]: I0414 13:31:41.103578 2132 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 14 13:31:41.104410 kubelet[2132]: W0414 13:31:41.103899 2132 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 13:31:41.110828 kubelet[2132]: I0414 13:31:41.110268 2132 server.go:1262] "Started kubelet" Apr 14 13:31:41.122658 kubelet[2132]: I0414 13:31:41.120376 2132 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 13:31:41.122658 kubelet[2132]: E0414 13:31:41.119778 2132 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63c63811f526e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:31:41.109981806 +0000 UTC m=+0.841744498,LastTimestamp:2026-04-14 13:31:41.109981806 +0000 UTC m=+0.841744498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:31:41.122658 kubelet[2132]: I0414 13:31:41.122217 2132 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 13:31:41.125119 kubelet[2132]: I0414 13:31:41.125094 2132 volume_manager.go:313] "Starting 
Kubelet Volume Manager" Apr 14 13:31:41.125359 kubelet[2132]: E0414 13:31:41.125337 2132 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:31:41.125488 kubelet[2132]: I0414 13:31:41.125472 2132 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 14 13:31:41.125536 kubelet[2132]: I0414 13:31:41.125519 2132 reconciler.go:29] "Reconciler: start to sync state" Apr 14 13:31:41.126585 kubelet[2132]: I0414 13:31:41.126529 2132 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 13:31:41.134885 kubelet[2132]: I0414 13:31:41.132087 2132 factory.go:223] Registration of the systemd container factory successfully Apr 14 13:31:41.134885 kubelet[2132]: I0414 13:31:41.132468 2132 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 13:31:41.134885 kubelet[2132]: I0414 13:31:41.134695 2132 server.go:310] "Adding debug handlers to kubelet server" Apr 14 13:31:41.137134 kubelet[2132]: I0414 13:31:41.137060 2132 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 13:31:41.137278 kubelet[2132]: I0414 13:31:41.137176 2132 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 14 13:31:41.137712 kubelet[2132]: E0414 13:31:41.137603 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms" Apr 14 13:31:41.137798 kubelet[2132]: E0414 13:31:41.137781 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:31:41.137825 kubelet[2132]: E0414 13:31:41.137798 2132 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 13:31:41.138577 kubelet[2132]: I0414 13:31:41.138526 2132 factory.go:223] Registration of the containerd container factory successfully Apr 14 13:31:41.147620 kubelet[2132]: I0414 13:31:41.147529 2132 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 14 13:31:41.152854 kubelet[2132]: I0414 13:31:41.152823 2132 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 13:31:41.188966 kubelet[2132]: I0414 13:31:41.186187 2132 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 13:31:41.188966 kubelet[2132]: I0414 13:31:41.186234 2132 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 13:31:41.188966 kubelet[2132]: I0414 13:31:41.187180 2132 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:31:41.240968 kubelet[2132]: E0414 13:31:41.240724 2132 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:31:41.248465 kubelet[2132]: I0414 13:31:41.247234 2132 policy_none.go:49] "None policy: Start" Apr 14 13:31:41.248728 kubelet[2132]: I0414 13:31:41.248510 2132 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 14 13:31:41.248728 kubelet[2132]: I0414 13:31:41.248621 2132 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 14 13:31:41.251935 kubelet[2132]: I0414 13:31:41.251860 2132 policy_none.go:47] "Start" Apr 14 13:31:41.361231 kubelet[2132]: E0414 
13:31:41.360206 2132 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:31:41.363371 kubelet[2132]: E0414 13:31:41.362766 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms" Apr 14 13:31:41.363858 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 14 13:31:41.374578 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 14 13:31:41.378249 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 14 13:31:41.380450 kubelet[2132]: I0414 13:31:41.380353 2132 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 14 13:31:41.380450 kubelet[2132]: I0414 13:31:41.380441 2132 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 14 13:31:41.380530 kubelet[2132]: I0414 13:31:41.380468 2132 kubelet.go:2428] "Starting kubelet main sync loop" Apr 14 13:31:41.380663 kubelet[2132]: E0414 13:31:41.380603 2132 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:31:41.381602 kubelet[2132]: E0414 13:31:41.381573 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:31:41.384435 kubelet[2132]: E0414 13:31:41.384382 2132 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" 
checkpoint="kubelet_internal_checkpoint" Apr 14 13:31:41.384695 kubelet[2132]: I0414 13:31:41.384587 2132 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:31:41.384695 kubelet[2132]: I0414 13:31:41.384597 2132 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:31:41.385064 kubelet[2132]: I0414 13:31:41.385033 2132 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:31:41.386158 kubelet[2132]: E0414 13:31:41.386011 2132 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 13:31:41.386158 kubelet[2132]: E0414 13:31:41.386059 2132 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:31:41.486188 kubelet[2132]: I0414 13:31:41.486052 2132 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:41.487545 kubelet[2132]: E0414 13:31:41.487493 2132 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Apr 14 13:31:41.498613 systemd[1]: Created slice kubepods-burstable-pod627dfa7d6346064e33172ea03aabee25.slice - libcontainer container kubepods-burstable-pod627dfa7d6346064e33172ea03aabee25.slice. Apr 14 13:31:41.529877 kubelet[2132]: E0414 13:31:41.529828 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:41.550467 systemd[1]: Created slice kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice - libcontainer container kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice. 
Apr 14 13:31:41.562268 kubelet[2132]: I0414 13:31:41.562188 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:41.562268 kubelet[2132]: I0414 13:31:41.562240 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:41.562447 kubelet[2132]: I0414 13:31:41.562324 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/627dfa7d6346064e33172ea03aabee25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"627dfa7d6346064e33172ea03aabee25\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:41.562447 kubelet[2132]: I0414 13:31:41.562383 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/627dfa7d6346064e33172ea03aabee25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"627dfa7d6346064e33172ea03aabee25\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:41.562447 kubelet[2132]: I0414 13:31:41.562407 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:41.562713 
kubelet[2132]: I0414 13:31:41.562447 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:41.562713 kubelet[2132]: I0414 13:31:41.562496 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/627dfa7d6346064e33172ea03aabee25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"627dfa7d6346064e33172ea03aabee25\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:41.562713 kubelet[2132]: I0414 13:31:41.562607 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:41.562713 kubelet[2132]: I0414 13:31:41.562692 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:41.567160 kubelet[2132]: E0414 13:31:41.567119 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:41.569558 systemd[1]: Created slice kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice - libcontainer container 
kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice. Apr 14 13:31:41.571428 kubelet[2132]: E0414 13:31:41.571395 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:41.694554 kubelet[2132]: I0414 13:31:41.694476 2132 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:41.695785 kubelet[2132]: E0414 13:31:41.695664 2132 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Apr 14 13:31:41.768562 kubelet[2132]: E0414 13:31:41.767600 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms" Apr 14 13:31:41.865711 kubelet[2132]: E0414 13:31:41.865530 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:41.868527 containerd[1458]: time="2026-04-14T13:31:41.868475514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:627dfa7d6346064e33172ea03aabee25,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:41.871902 kubelet[2132]: E0414 13:31:41.871471 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:41.872261 containerd[1458]: time="2026-04-14T13:31:41.872196188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:41.874088 kubelet[2132]: E0414 
13:31:41.874060 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:41.874623 containerd[1458]: time="2026-04-14T13:31:41.874539809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:42.097697 kubelet[2132]: I0414 13:31:42.097547 2132 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:42.098308 kubelet[2132]: E0414 13:31:42.098070 2132 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Apr 14 13:31:42.315503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422951168.mount: Deactivated successfully. Apr 14 13:31:42.325348 containerd[1458]: time="2026-04-14T13:31:42.325286324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:42.326438 containerd[1458]: time="2026-04-14T13:31:42.326377303Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:42.326973 containerd[1458]: time="2026-04-14T13:31:42.326931398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:31:42.327869 containerd[1458]: time="2026-04-14T13:31:42.327831835Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:42.329276 containerd[1458]: 
time="2026-04-14T13:31:42.329110640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 13:31:42.330100 containerd[1458]: time="2026-04-14T13:31:42.330031658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:31:42.330938 containerd[1458]: time="2026-04-14T13:31:42.330854050Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:42.336004 containerd[1458]: time="2026-04-14T13:31:42.335939374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:42.336682 containerd[1458]: time="2026-04-14T13:31:42.336611806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.928686ms" Apr 14 13:31:42.337073 containerd[1458]: time="2026-04-14T13:31:42.337046328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.776171ms" Apr 14 13:31:42.342770 containerd[1458]: time="2026-04-14T13:31:42.342681334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.138239ms" Apr 14 13:31:42.669197 kubelet[2132]: E0414 13:31:42.468615 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:31:42.674099 kubelet[2132]: E0414 13:31:42.674042 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="1.6s" Apr 14 13:31:42.674267 kubelet[2132]: E0414 13:31:42.674251 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:31:42.674395 kubelet[2132]: E0414 13:31:42.674323 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:31:42.930988 kubelet[2132]: E0414 13:31:42.929852 2132 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:31:42.930988 kubelet[2132]: I0414 13:31:42.929871 2132 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:42.932346 kubelet[2132]: E0414 13:31:42.932291 2132 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Apr 14 13:31:43.005138 containerd[1458]: time="2026-04-14T13:31:43.002743458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:43.005138 containerd[1458]: time="2026-04-14T13:31:43.002824144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:43.005138 containerd[1458]: time="2026-04-14T13:31:43.003682537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:43.005138 containerd[1458]: time="2026-04-14T13:31:43.003803361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:43.007054 containerd[1458]: time="2026-04-14T13:31:43.006899674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:43.007054 containerd[1458]: time="2026-04-14T13:31:43.006975577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:43.007054 containerd[1458]: time="2026-04-14T13:31:43.007007727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:43.009726 containerd[1458]: time="2026-04-14T13:31:43.009469620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:43.009726 containerd[1458]: time="2026-04-14T13:31:43.009588186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:43.009885 containerd[1458]: time="2026-04-14T13:31:43.009767376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:43.009885 containerd[1458]: time="2026-04-14T13:31:43.009831906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:43.012882 containerd[1458]: time="2026-04-14T13:31:43.010564240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:43.190375 systemd[1]: Started cri-containerd-223ca9f54e6ad36cfaf690fc0e6fd2b67b1b64bd9c3305c72ecde2b965fa8afe.scope - libcontainer container 223ca9f54e6ad36cfaf690fc0e6fd2b67b1b64bd9c3305c72ecde2b965fa8afe. Apr 14 13:31:43.257202 systemd[1]: Started cri-containerd-6fece54f6e8820329e9c676a0d4449e338bd15bee7d88f00acdafcb67a52458d.scope - libcontainer container 6fece54f6e8820329e9c676a0d4449e338bd15bee7d88f00acdafcb67a52458d. 
Apr 14 13:31:43.258673 kubelet[2132]: E0414 13:31:43.258138 2132 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:31:43.268007 systemd[1]: Started cri-containerd-129518564a031019f44bb96ad326edc3b8ec5e9422fe0a8f670a583a4bb16014.scope - libcontainer container 129518564a031019f44bb96ad326edc3b8ec5e9422fe0a8f670a583a4bb16014. Apr 14 13:31:43.345084 containerd[1458]: time="2026-04-14T13:31:43.342107993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fece54f6e8820329e9c676a0d4449e338bd15bee7d88f00acdafcb67a52458d\"" Apr 14 13:31:43.345289 kubelet[2132]: E0414 13:31:43.343224 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:43.364020 containerd[1458]: time="2026-04-14T13:31:43.361359735Z" level=info msg="CreateContainer within sandbox \"6fece54f6e8820329e9c676a0d4449e338bd15bee7d88f00acdafcb67a52458d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 13:31:43.523147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195018248.mount: Deactivated successfully. 
Apr 14 13:31:43.569133 containerd[1458]: time="2026-04-14T13:31:43.568720161Z" level=info msg="CreateContainer within sandbox \"6fece54f6e8820329e9c676a0d4449e338bd15bee7d88f00acdafcb67a52458d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc4d33f6378c5cbf4628d7dc34fdee9514291cb98da4e2db23d42638b9311ba4\""
Apr 14 13:31:43.570449 containerd[1458]: time="2026-04-14T13:31:43.570417444Z" level=info msg="StartContainer for \"bc4d33f6378c5cbf4628d7dc34fdee9514291cb98da4e2db23d42638b9311ba4\""
Apr 14 13:31:43.578953 containerd[1458]: time="2026-04-14T13:31:43.578541374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:627dfa7d6346064e33172ea03aabee25,Namespace:kube-system,Attempt:0,} returns sandbox id \"223ca9f54e6ad36cfaf690fc0e6fd2b67b1b64bd9c3305c72ecde2b965fa8afe\""
Apr 14 13:31:43.579425 kubelet[2132]: E0414 13:31:43.579403 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:43.587673 containerd[1458]: time="2026-04-14T13:31:43.587314882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,} returns sandbox id \"129518564a031019f44bb96ad326edc3b8ec5e9422fe0a8f670a583a4bb16014\""
Apr 14 13:31:43.587673 containerd[1458]: time="2026-04-14T13:31:43.587516611Z" level=info msg="CreateContainer within sandbox \"223ca9f54e6ad36cfaf690fc0e6fd2b67b1b64bd9c3305c72ecde2b965fa8afe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 14 13:31:43.590562 kubelet[2132]: E0414 13:31:43.590279 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:43.597583 containerd[1458]: time="2026-04-14T13:31:43.597516541Z" level=info msg="CreateContainer within sandbox \"129518564a031019f44bb96ad326edc3b8ec5e9422fe0a8f670a583a4bb16014\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 14 13:31:43.616821 containerd[1458]: time="2026-04-14T13:31:43.616745948Z" level=info msg="CreateContainer within sandbox \"223ca9f54e6ad36cfaf690fc0e6fd2b67b1b64bd9c3305c72ecde2b965fa8afe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1538d6db81aeadd00d17553fe5f12cdaf93371cfca4d8b51228d4d5d55534b39\""
Apr 14 13:31:43.618260 containerd[1458]: time="2026-04-14T13:31:43.618216339Z" level=info msg="StartContainer for \"1538d6db81aeadd00d17553fe5f12cdaf93371cfca4d8b51228d4d5d55534b39\""
Apr 14 13:31:43.620979 containerd[1458]: time="2026-04-14T13:31:43.619342719Z" level=info msg="CreateContainer within sandbox \"129518564a031019f44bb96ad326edc3b8ec5e9422fe0a8f670a583a4bb16014\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36dd23f16313ca20c7105ea94c7d845f5063e173df67e312fc6b3448a32f2788\""
Apr 14 13:31:43.620979 containerd[1458]: time="2026-04-14T13:31:43.620752873Z" level=info msg="StartContainer for \"36dd23f16313ca20c7105ea94c7d845f5063e173df67e312fc6b3448a32f2788\""
Apr 14 13:31:43.631235 systemd[1]: Started cri-containerd-bc4d33f6378c5cbf4628d7dc34fdee9514291cb98da4e2db23d42638b9311ba4.scope - libcontainer container bc4d33f6378c5cbf4628d7dc34fdee9514291cb98da4e2db23d42638b9311ba4.
Apr 14 13:31:43.665234 systemd[1]: Started cri-containerd-1538d6db81aeadd00d17553fe5f12cdaf93371cfca4d8b51228d4d5d55534b39.scope - libcontainer container 1538d6db81aeadd00d17553fe5f12cdaf93371cfca4d8b51228d4d5d55534b39.
Apr 14 13:31:43.695575 systemd[1]: Started cri-containerd-36dd23f16313ca20c7105ea94c7d845f5063e173df67e312fc6b3448a32f2788.scope - libcontainer container 36dd23f16313ca20c7105ea94c7d845f5063e173df67e312fc6b3448a32f2788.
Apr 14 13:31:43.699426 containerd[1458]: time="2026-04-14T13:31:43.699375358Z" level=info msg="StartContainer for \"bc4d33f6378c5cbf4628d7dc34fdee9514291cb98da4e2db23d42638b9311ba4\" returns successfully"
Apr 14 13:31:43.851486 containerd[1458]: time="2026-04-14T13:31:43.850993241Z" level=info msg="StartContainer for \"1538d6db81aeadd00d17553fe5f12cdaf93371cfca4d8b51228d4d5d55534b39\" returns successfully"
Apr 14 13:31:43.956567 containerd[1458]: time="2026-04-14T13:31:43.956462123Z" level=info msg="StartContainer for \"36dd23f16313ca20c7105ea94c7d845f5063e173df67e312fc6b3448a32f2788\" returns successfully"
Apr 14 13:31:44.446474 kubelet[2132]: E0414 13:31:44.446357 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:31:44.446868 kubelet[2132]: E0414 13:31:44.446694 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:44.453967 kubelet[2132]: E0414 13:31:44.453324 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:31:44.453967 kubelet[2132]: E0414 13:31:44.453481 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:44.454460 kubelet[2132]: E0414 13:31:44.454201 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:31:44.456964 kubelet[2132]: E0414 13:31:44.456265 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:44.563951 kubelet[2132]: I0414 13:31:44.562312 2132 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:31:45.458061 kubelet[2132]: E0414 13:31:45.457982 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:31:45.470698 kubelet[2132]: E0414 13:31:45.470614 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:45.483822 kubelet[2132]: E0414 13:31:45.483768 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:31:45.484082 kubelet[2132]: E0414 13:31:45.484029 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:45.485869 kubelet[2132]: E0414 13:31:45.485602 2132 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:31:45.486620 kubelet[2132]: E0414 13:31:45.486577 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:50.488643 kubelet[2132]: E0414 13:31:50.488550 2132 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 14 13:31:50.532562 kubelet[2132]: E0414 13:31:50.532286 2132 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a63c63811f526e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:31:41.109981806 +0000 UTC m=+0.841744498,LastTimestamp:2026-04-14 13:31:41.109981806 +0000 UTC m=+0.841744498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 13:31:50.591992 kubelet[2132]: I0414 13:31:50.590595 2132 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 13:31:50.606898 kubelet[2132]: E0414 13:31:50.606780 2132 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a63c6382c76ae1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:31:41.137775329 +0000 UTC m=+0.869538046,LastTimestamp:2026-04-14 13:31:41.137775329 +0000 UTC m=+0.869538046,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 13:31:50.626326 kubelet[2132]: I0414 13:31:50.626247 2132 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:50.819884 kubelet[2132]: E0414 13:31:50.819643 2132 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:50.820422 kubelet[2132]: I0414 13:31:50.820206 2132 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:31:50.826082 kubelet[2132]: E0414 13:31:50.826023 2132 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:31:50.826082 kubelet[2132]: I0414 13:31:50.826071 2132 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:50.828547 kubelet[2132]: E0414 13:31:50.828492 2132 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:51.067136 kubelet[2132]: I0414 13:31:51.067057 2132 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:31:51.074603 kubelet[2132]: E0414 13:31:51.074444 2132 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:31:51.075118 kubelet[2132]: E0414 13:31:51.074661 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:51.312958 kubelet[2132]: I0414 13:31:51.310243 2132 apiserver.go:52] "Watching apiserver"
Apr 14 13:31:51.329996 kubelet[2132]: I0414 13:31:51.329752 2132 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 14 13:31:52.502311 kubelet[2132]: I0414 13:31:52.502027 2132 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:52.512339 kubelet[2132]: E0414 13:31:52.512296 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:53.514494 kubelet[2132]: E0414 13:31:53.514440 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:54.230215 kubelet[2132]: I0414 13:31:54.230181 2132 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:54.250331 kubelet[2132]: E0414 13:31:54.250047 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:54.256874 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-7.scope)...
Apr 14 13:31:54.256887 systemd[1]: Reloading...
Apr 14 13:31:54.398973 kubelet[2132]: I0414 13:31:54.398642 2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.398626065 podStartE2EDuration="2.398626065s" podCreationTimestamp="2026-04-14 13:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:31:54.397944655 +0000 UTC m=+14.129707350" watchObservedRunningTime="2026-04-14 13:31:54.398626065 +0000 UTC m=+14.130388767"
Apr 14 13:31:54.431042 zram_generator::config[2463]: No configuration found.
Apr 14 13:31:54.516449 kubelet[2132]: E0414 13:31:54.516318 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:54.541790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:31:54.618615 systemd[1]: Reloading finished in 360 ms.
Apr 14 13:31:54.661435 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:31:54.677538 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 13:31:54.678263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:31:54.678323 systemd[1]: kubelet.service: Consumed 3.532s CPU time, 129.6M memory peak, 0B memory swap peak.
Apr 14 13:31:54.688213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:31:54.986023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:31:54.986633 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 13:31:55.333610 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 13:31:55.333610 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 13:31:55.333610 kubelet[2512]: I0414 13:31:55.333243 2512 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 13:31:55.390499 kubelet[2512]: I0414 13:31:55.389815 2512 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 14 13:31:55.390499 kubelet[2512]: I0414 13:31:55.389850 2512 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 13:31:55.390499 kubelet[2512]: I0414 13:31:55.389877 2512 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 13:31:55.390499 kubelet[2512]: I0414 13:31:55.389884 2512 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 13:31:55.392493 kubelet[2512]: I0414 13:31:55.392416 2512 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 13:31:55.394722 kubelet[2512]: I0414 13:31:55.394632 2512 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 14 13:31:55.398693 kubelet[2512]: I0414 13:31:55.398591 2512 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 13:31:55.454177 kubelet[2512]: E0414 13:31:55.454099 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 13:31:55.454753 kubelet[2512]: I0414 13:31:55.454489 2512 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 13:31:55.470824 kubelet[2512]: I0414 13:31:55.470782 2512 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 13:31:55.472158 kubelet[2512]: I0414 13:31:55.471853 2512 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 13:31:55.472294 kubelet[2512]: I0414 13:31:55.471994 2512 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 13:31:55.472294 kubelet[2512]: I0414 13:31:55.472231 2512 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 13:31:55.472294 kubelet[2512]: I0414 13:31:55.472258 2512 container_manager_linux.go:306] "Creating device plugin manager"
Apr 14 13:31:55.472294 kubelet[2512]: I0414 13:31:55.472281 2512 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 13:31:55.472561 kubelet[2512]: I0414 13:31:55.472484 2512 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:31:55.472648 kubelet[2512]: I0414 13:31:55.472636 2512 kubelet.go:475] "Attempting to sync node with API server"
Apr 14 13:31:55.472713 kubelet[2512]: I0414 13:31:55.472654 2512 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 13:31:55.472713 kubelet[2512]: I0414 13:31:55.472697 2512 kubelet.go:387] "Adding apiserver pod source"
Apr 14 13:31:55.472746 kubelet[2512]: I0414 13:31:55.472716 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 13:31:55.475021 kubelet[2512]: I0414 13:31:55.474847 2512 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 13:31:55.476958 kubelet[2512]: I0414 13:31:55.475819 2512 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 13:31:55.476958 kubelet[2512]: I0414 13:31:55.475845 2512 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 13:31:55.499007 kubelet[2512]: I0414 13:31:55.496877 2512 server.go:1262] "Started kubelet"
Apr 14 13:31:55.507343 kubelet[2512]: I0414 13:31:55.506117 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 13:31:55.508798 kubelet[2512]: I0414 13:31:55.508389 2512 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 13:31:55.509979 kubelet[2512]: I0414 13:31:55.509844 2512 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 13:31:55.512287 kubelet[2512]: I0414 13:31:55.511798 2512 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 13:31:55.512287 kubelet[2512]: E0414 13:31:55.512125 2512 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:31:55.512287 kubelet[2512]: I0414 13:31:55.512170 2512 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 13:31:55.512287 kubelet[2512]: I0414 13:31:55.511618 2512 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 13:31:55.512287 kubelet[2512]: I0414 13:31:55.510033 2512 server.go:310] "Adding debug handlers to kubelet server"
Apr 14 13:31:55.513702 kubelet[2512]: I0414 13:31:55.511609 2512 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 14 13:31:55.514164 kubelet[2512]: I0414 13:31:55.511176 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 13:31:55.527855 kubelet[2512]: I0414 13:31:55.527038 2512 factory.go:223] Registration of the systemd container factory successfully
Apr 14 13:31:55.527855 kubelet[2512]: I0414 13:31:55.527230 2512 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 13:31:55.528350 kubelet[2512]: I0414 13:31:55.528232 2512 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 13:31:55.530424 kubelet[2512]: E0414 13:31:55.530341 2512 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 13:31:55.533807 kubelet[2512]: I0414 13:31:55.532145 2512 factory.go:223] Registration of the containerd container factory successfully
Apr 14 13:31:55.575173 kubelet[2512]: I0414 13:31:55.573053 2512 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 13:31:55.577105 kubelet[2512]: I0414 13:31:55.577059 2512 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 13:31:55.577105 kubelet[2512]: I0414 13:31:55.577099 2512 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 14 13:31:55.577191 kubelet[2512]: I0414 13:31:55.577120 2512 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 14 13:31:55.577252 kubelet[2512]: E0414 13:31:55.577195 2512 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 13:31:55.672694 kubelet[2512]: I0414 13:31:55.672470 2512 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 13:31:55.672694 kubelet[2512]: I0414 13:31:55.672502 2512 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 13:31:55.672694 kubelet[2512]: I0414 13:31:55.672520 2512 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:31:55.672694 kubelet[2512]: I0414 13:31:55.672693 2512 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 14 13:31:55.673145 kubelet[2512]: I0414 13:31:55.672704 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 14 13:31:55.673145 kubelet[2512]: I0414 13:31:55.672720 2512 policy_none.go:49] "None policy: Start"
Apr 14 13:31:55.673145 kubelet[2512]: I0414 13:31:55.672729 2512 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 13:31:55.673145 kubelet[2512]: I0414 13:31:55.672736 2512 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 13:31:55.673145 kubelet[2512]: I0414 13:31:55.672848 2512 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 14 13:31:55.673145 kubelet[2512]: I0414 13:31:55.672857 2512 policy_none.go:47] "Start"
Apr 14 13:31:55.678012 kubelet[2512]: E0414 13:31:55.677799 2512 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:31:55.694215 kubelet[2512]: E0414 13:31:55.692805 2512 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 13:31:55.696208 kubelet[2512]: I0414 13:31:55.696147 2512 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 13:31:55.696208 kubelet[2512]: I0414 13:31:55.696189 2512 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 13:31:55.697064 kubelet[2512]: I0414 13:31:55.697017 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 13:31:55.704636 kubelet[2512]: E0414 13:31:55.704457 2512 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 13:31:55.880741 kubelet[2512]: I0414 13:31:55.880649 2512 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:55.881642 kubelet[2512]: I0414 13:31:55.881607 2512 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:31:55.882045 kubelet[2512]: I0414 13:31:55.882019 2512 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.882386 kubelet[2512]: I0414 13:31:55.882365 2512 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:31:55.932708 kubelet[2512]: E0414 13:31:55.932520 2512 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:55.940833 kubelet[2512]: I0414 13:31:55.940503 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.944615 kubelet[2512]: E0414 13:31:55.942657 2512 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.944615 kubelet[2512]: I0414 13:31:55.943642 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 13:31:55.944615 kubelet[2512]: I0414 13:31:55.944287 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/627dfa7d6346064e33172ea03aabee25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"627dfa7d6346064e33172ea03aabee25\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:55.944615 kubelet[2512]: I0414 13:31:55.944373 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/627dfa7d6346064e33172ea03aabee25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"627dfa7d6346064e33172ea03aabee25\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:55.944615 kubelet[2512]: I0414 13:31:55.944401 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/627dfa7d6346064e33172ea03aabee25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"627dfa7d6346064e33172ea03aabee25\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:31:55.944615 kubelet[2512]: I0414 13:31:55.944449 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.945100 kubelet[2512]: I0414 13:31:55.944470 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.945100 kubelet[2512]: I0414 13:31:55.944500 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.945100 kubelet[2512]: I0414 13:31:55.944525 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:31:55.952139 kubelet[2512]: I0414 13:31:55.951769 2512 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 14 13:31:55.952637 kubelet[2512]: I0414 13:31:55.952469 2512 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 13:31:56.233579 kubelet[2512]: E0414 13:31:56.233430 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:56.239340 kubelet[2512]: E0414 13:31:56.236796 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:56.246965 kubelet[2512]: E0414 13:31:56.246864 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:56.475968 kubelet[2512]: I0414 13:31:56.475140 2512 apiserver.go:52] "Watching apiserver"
Apr 14 13:31:56.523181 kubelet[2512]: I0414 13:31:56.519731 2512 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 14 13:31:56.751281 kubelet[2512]: E0414 13:31:56.750756 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:56.764528 kubelet[2512]: E0414 13:31:56.758433 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:56.793243 kubelet[2512]: E0414 13:31:56.793073 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:56.890143 kubelet[2512]: I0414 13:31:56.890003 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.889987225 podStartE2EDuration="1.889987225s" podCreationTimestamp="2026-04-14 13:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:31:56.889695226 +0000 UTC m=+1.726027892" watchObservedRunningTime="2026-04-14 13:31:56.889987225 +0000 UTC m=+1.726319885"
Apr 14 13:31:57.742548 kubelet[2512]: E0414 13:31:57.742462 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:57.744177 kubelet[2512]: E0414 13:31:57.743500 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:31:59.010173 kubelet[2512]: I0414 13:31:59.010092 2512 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 14 13:31:59.011181 containerd[1458]: time="2026-04-14T13:31:59.011137547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 14 13:31:59.011413 kubelet[2512]: I0414 13:31:59.011372 2512 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 14 13:31:59.713171 systemd[1]: Created slice kubepods-besteffort-pod7244eee5_0693_4f47_b9cc_98a55dd663fa.slice - libcontainer container kubepods-besteffort-pod7244eee5_0693_4f47_b9cc_98a55dd663fa.slice.
Apr 14 13:31:59.766249 kubelet[2512]: I0414 13:31:59.766157 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7244eee5-0693-4f47-b9cc-98a55dd663fa-xtables-lock\") pod \"kube-proxy-jkggq\" (UID: \"7244eee5-0693-4f47-b9cc-98a55dd663fa\") " pod="kube-system/kube-proxy-jkggq"
Apr 14 13:31:59.766249 kubelet[2512]: I0414 13:31:59.766219 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4nkj\" (UniqueName: \"kubernetes.io/projected/7244eee5-0693-4f47-b9cc-98a55dd663fa-kube-api-access-f4nkj\") pod \"kube-proxy-jkggq\" (UID: \"7244eee5-0693-4f47-b9cc-98a55dd663fa\") " pod="kube-system/kube-proxy-jkggq"
Apr 14 13:31:59.767279 kubelet[2512]: I0414 13:31:59.766327 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7244eee5-0693-4f47-b9cc-98a55dd663fa-kube-proxy\") pod \"kube-proxy-jkggq\" (UID: \"7244eee5-0693-4f47-b9cc-98a55dd663fa\") " pod="kube-system/kube-proxy-jkggq"
Apr 14 13:31:59.767279 kubelet[2512]: I0414 13:31:59.766346 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7244eee5-0693-4f47-b9cc-98a55dd663fa-lib-modules\") pod \"kube-proxy-jkggq\" (UID: \"7244eee5-0693-4f47-b9cc-98a55dd663fa\") " pod="kube-system/kube-proxy-jkggq"
Apr 14 13:31:59.885498 kubelet[2512]: E0414 13:31:59.885392 2512 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 14 13:31:59.885498 kubelet[2512]: E0414 13:31:59.885468 2512 projected.go:196] Error preparing data for projected volume kube-api-access-f4nkj for pod kube-system/kube-proxy-jkggq: configmap "kube-root-ca.crt" not found Apr 14 13:31:59.885782 kubelet[2512]: E0414 13:31:59.885616 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7244eee5-0693-4f47-b9cc-98a55dd663fa-kube-api-access-f4nkj podName:7244eee5-0693-4f47-b9cc-98a55dd663fa nodeName:}" failed. No retries permitted until 2026-04-14 13:32:00.385557089 +0000 UTC m=+5.221889749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f4nkj" (UniqueName: "kubernetes.io/projected/7244eee5-0693-4f47-b9cc-98a55dd663fa-kube-api-access-f4nkj") pod "kube-proxy-jkggq" (UID: "7244eee5-0693-4f47-b9cc-98a55dd663fa") : configmap "kube-root-ca.crt" not found Apr 14 13:32:00.169713 kubelet[2512]: I0414 13:32:00.169559 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08c84ef7-da46-46fb-aefe-4fdac842ee53-var-lib-calico\") pod \"tigera-operator-5588576f44-s5zlk\" (UID: \"08c84ef7-da46-46fb-aefe-4fdac842ee53\") " pod="tigera-operator/tigera-operator-5588576f44-s5zlk" Apr 14 13:32:00.169713 kubelet[2512]: I0414 13:32:00.169648 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsr6\" (UniqueName: \"kubernetes.io/projected/08c84ef7-da46-46fb-aefe-4fdac842ee53-kube-api-access-nzsr6\") pod \"tigera-operator-5588576f44-s5zlk\" (UID: \"08c84ef7-da46-46fb-aefe-4fdac842ee53\") " pod="tigera-operator/tigera-operator-5588576f44-s5zlk" Apr 14 13:32:00.183895 systemd[1]: Created slice 
kubepods-besteffort-pod08c84ef7_da46_46fb_aefe_4fdac842ee53.slice - libcontainer container kubepods-besteffort-pod08c84ef7_da46_46fb_aefe_4fdac842ee53.slice. Apr 14 13:32:00.492688 containerd[1458]: time="2026-04-14T13:32:00.492505605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-s5zlk,Uid:08c84ef7-da46-46fb-aefe-4fdac842ee53,Namespace:tigera-operator,Attempt:0,}" Apr 14 13:32:00.539799 containerd[1458]: time="2026-04-14T13:32:00.539448819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:00.539799 containerd[1458]: time="2026-04-14T13:32:00.539649316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:00.539799 containerd[1458]: time="2026-04-14T13:32:00.539714813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:00.541217 containerd[1458]: time="2026-04-14T13:32:00.540431416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:00.576269 systemd[1]: Started cri-containerd-e2458bbecf7a4a08503db900b08998d16ee5840d34230ac8bc5e367aefec122f.scope - libcontainer container e2458bbecf7a4a08503db900b08998d16ee5840d34230ac8bc5e367aefec122f. 
Apr 14 13:32:00.626267 containerd[1458]: time="2026-04-14T13:32:00.625843664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-s5zlk,Uid:08c84ef7-da46-46fb-aefe-4fdac842ee53,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e2458bbecf7a4a08503db900b08998d16ee5840d34230ac8bc5e367aefec122f\"" Apr 14 13:32:00.629347 containerd[1458]: time="2026-04-14T13:32:00.628691797Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 13:32:00.631362 kubelet[2512]: E0414 13:32:00.631201 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:00.632090 containerd[1458]: time="2026-04-14T13:32:00.632068521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jkggq,Uid:7244eee5-0693-4f47-b9cc-98a55dd663fa,Namespace:kube-system,Attempt:0,}" Apr 14 13:32:00.665093 containerd[1458]: time="2026-04-14T13:32:00.664659042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:00.665093 containerd[1458]: time="2026-04-14T13:32:00.664715031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:00.665093 containerd[1458]: time="2026-04-14T13:32:00.664739201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:00.665666 containerd[1458]: time="2026-04-14T13:32:00.665596607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:00.691561 systemd[1]: Started cri-containerd-1ac75ed77d24ad7c8d522874ab4576fc7a3e4b0f6c0c4bae74a3ee9126929a0a.scope - libcontainer container 1ac75ed77d24ad7c8d522874ab4576fc7a3e4b0f6c0c4bae74a3ee9126929a0a. Apr 14 13:32:00.728735 containerd[1458]: time="2026-04-14T13:32:00.728572775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jkggq,Uid:7244eee5-0693-4f47-b9cc-98a55dd663fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ac75ed77d24ad7c8d522874ab4576fc7a3e4b0f6c0c4bae74a3ee9126929a0a\"" Apr 14 13:32:00.733294 kubelet[2512]: E0414 13:32:00.733255 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:00.767450 containerd[1458]: time="2026-04-14T13:32:00.767049791Z" level=info msg="CreateContainer within sandbox \"1ac75ed77d24ad7c8d522874ab4576fc7a3e4b0f6c0c4bae74a3ee9126929a0a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 13:32:00.793986 containerd[1458]: time="2026-04-14T13:32:00.793201856Z" level=info msg="CreateContainer within sandbox \"1ac75ed77d24ad7c8d522874ab4576fc7a3e4b0f6c0c4bae74a3ee9126929a0a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3cc522bfb77e3b2c45ecc67814fb79e2150cfb99a7aa95bc2d717afbac0fad88\"" Apr 14 13:32:00.797844 containerd[1458]: time="2026-04-14T13:32:00.797476090Z" level=info msg="StartContainer for \"3cc522bfb77e3b2c45ecc67814fb79e2150cfb99a7aa95bc2d717afbac0fad88\"" Apr 14 13:32:00.876525 kubelet[2512]: E0414 13:32:00.876486 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:00.931952 systemd[1]: Started cri-containerd-3cc522bfb77e3b2c45ecc67814fb79e2150cfb99a7aa95bc2d717afbac0fad88.scope - libcontainer 
container 3cc522bfb77e3b2c45ecc67814fb79e2150cfb99a7aa95bc2d717afbac0fad88. Apr 14 13:32:00.973513 containerd[1458]: time="2026-04-14T13:32:00.973426332Z" level=info msg="StartContainer for \"3cc522bfb77e3b2c45ecc67814fb79e2150cfb99a7aa95bc2d717afbac0fad88\" returns successfully" Apr 14 13:32:01.785981 kubelet[2512]: E0414 13:32:01.785894 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:01.786715 kubelet[2512]: E0414 13:32:01.786557 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:02.051567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004473361.mount: Deactivated successfully. Apr 14 13:32:02.801588 kubelet[2512]: E0414 13:32:02.801466 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:03.059968 containerd[1458]: time="2026-04-14T13:32:03.059499905Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:03.060844 containerd[1458]: time="2026-04-14T13:32:03.060799987Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 14 13:32:03.062563 containerd[1458]: time="2026-04-14T13:32:03.062450057Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:03.066003 containerd[1458]: time="2026-04-14T13:32:03.065960627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:03.066541 containerd[1458]: time="2026-04-14T13:32:03.066510180Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.43778352s" Apr 14 13:32:03.066567 containerd[1458]: time="2026-04-14T13:32:03.066549028Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 14 13:32:03.074618 containerd[1458]: time="2026-04-14T13:32:03.074480794Z" level=info msg="CreateContainer within sandbox \"e2458bbecf7a4a08503db900b08998d16ee5840d34230ac8bc5e367aefec122f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 14 13:32:03.091501 containerd[1458]: time="2026-04-14T13:32:03.091319879Z" level=info msg="CreateContainer within sandbox \"e2458bbecf7a4a08503db900b08998d16ee5840d34230ac8bc5e367aefec122f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c8d2aaecf9b26af2a67d6babb3c530e15b9839318fbde4521655a1016ef9dd4d\"" Apr 14 13:32:03.095792 containerd[1458]: time="2026-04-14T13:32:03.094420555Z" level=info msg="StartContainer for \"c8d2aaecf9b26af2a67d6babb3c530e15b9839318fbde4521655a1016ef9dd4d\"" Apr 14 13:32:03.129285 systemd[1]: run-containerd-runc-k8s.io-c8d2aaecf9b26af2a67d6babb3c530e15b9839318fbde4521655a1016ef9dd4d-runc.RKIjSO.mount: Deactivated successfully. Apr 14 13:32:03.140441 systemd[1]: Started cri-containerd-c8d2aaecf9b26af2a67d6babb3c530e15b9839318fbde4521655a1016ef9dd4d.scope - libcontainer container c8d2aaecf9b26af2a67d6babb3c530e15b9839318fbde4521655a1016ef9dd4d. 
Apr 14 13:32:03.174414 containerd[1458]: time="2026-04-14T13:32:03.174333434Z" level=info msg="StartContainer for \"c8d2aaecf9b26af2a67d6babb3c530e15b9839318fbde4521655a1016ef9dd4d\" returns successfully" Apr 14 13:32:03.897426 kubelet[2512]: I0414 13:32:03.896812 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-s5zlk" podStartSLOduration=1.457129052 podStartE2EDuration="3.896785386s" podCreationTimestamp="2026-04-14 13:32:00 +0000 UTC" firstStartedPulling="2026-04-14 13:32:00.628314681 +0000 UTC m=+5.464647341" lastFinishedPulling="2026-04-14 13:32:03.067971016 +0000 UTC m=+7.904303675" observedRunningTime="2026-04-14 13:32:03.89331643 +0000 UTC m=+8.729649105" watchObservedRunningTime="2026-04-14 13:32:03.896785386 +0000 UTC m=+8.733118056" Apr 14 13:32:03.897426 kubelet[2512]: I0414 13:32:03.897256 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jkggq" podStartSLOduration=4.897211678 podStartE2EDuration="4.897211678s" podCreationTimestamp="2026-04-14 13:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:32:01.88695779 +0000 UTC m=+6.723290457" watchObservedRunningTime="2026-04-14 13:32:03.897211678 +0000 UTC m=+8.733544364" Apr 14 13:32:03.929833 kubelet[2512]: E0414 13:32:03.929592 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:04.811717 kubelet[2512]: E0414 13:32:04.811652 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:05.818633 kubelet[2512]: E0414 13:32:05.818530 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:05.920041 kubelet[2512]: E0414 13:32:05.919985 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:06.253685 update_engine[1449]: I20260414 13:32:06.252998 1449 update_attempter.cc:509] Updating boot flags... Apr 14 13:32:06.334659 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2878) Apr 14 13:32:06.412894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2881) Apr 14 13:32:06.820286 kubelet[2512]: E0414 13:32:06.820200 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:07.844453 kubelet[2512]: E0414 13:32:07.844356 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:10.653317 sudo[1636]: pam_unix(sudo:session): session closed for user root Apr 14 13:32:10.667042 sshd[1633]: pam_unix(sshd:session): session closed for user core Apr 14 13:32:10.687448 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:53790.service: Deactivated successfully. Apr 14 13:32:10.706345 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 13:32:10.706790 systemd[1]: session-7.scope: Consumed 10.102s CPU time, 157.4M memory peak, 0B memory swap peak. Apr 14 13:32:10.722188 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Apr 14 13:32:10.730334 systemd-logind[1445]: Removed session 7. 
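The `pod_startup_latency_tracker` entries above can be checked by hand: `podStartSLOduration` is the end-to-end startup duration minus the time spent pulling images, which the tracker derives from the monotonic `m=` offsets. Reproducing the tigera-operator numbers from the log (a worked check, not the tracker's actual code):

```python
# Recompute podStartSLOduration for tigera-operator-5588576f44-s5zlk
# from the log's monotonic (m=) offsets: the SLO duration excludes
# time spent pulling the container image.
first_started_pulling = 5.464647341   # m= offset at firstStartedPulling
last_finished_pulling = 7.904303675   # m= offset at lastFinishedPulling
e2e_duration = 3.896785386            # podStartE2EDuration, seconds

pull_time = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_time
print(f"podStartSLOduration={slo_duration:.9f}s")
```

This recovers the logged value of 1.457129052s; pods that never pull (both pull timestamps at the zero time `0001-01-01`), like kube-proxy-jkggq above, report SLO duration equal to the E2E duration.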
Apr 14 13:32:23.255470 systemd[1]: Created slice kubepods-besteffort-podffc8f313_fd3f_415d_bc34_4f85e818a562.slice - libcontainer container kubepods-besteffort-podffc8f313_fd3f_415d_bc34_4f85e818a562.slice. Apr 14 13:32:23.278420 kubelet[2512]: I0414 13:32:23.278376 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ffc8f313-fd3f-415d-bc34-4f85e818a562-typha-certs\") pod \"calico-typha-8449489587-7l5gl\" (UID: \"ffc8f313-fd3f-415d-bc34-4f85e818a562\") " pod="calico-system/calico-typha-8449489587-7l5gl" Apr 14 13:32:23.278420 kubelet[2512]: I0414 13:32:23.278415 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzgc\" (UniqueName: \"kubernetes.io/projected/ffc8f313-fd3f-415d-bc34-4f85e818a562-kube-api-access-7zzgc\") pod \"calico-typha-8449489587-7l5gl\" (UID: \"ffc8f313-fd3f-415d-bc34-4f85e818a562\") " pod="calico-system/calico-typha-8449489587-7l5gl" Apr 14 13:32:23.278420 kubelet[2512]: I0414 13:32:23.278460 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffc8f313-fd3f-415d-bc34-4f85e818a562-tigera-ca-bundle\") pod \"calico-typha-8449489587-7l5gl\" (UID: \"ffc8f313-fd3f-415d-bc34-4f85e818a562\") " pod="calico-system/calico-typha-8449489587-7l5gl" Apr 14 13:32:23.596943 kubelet[2512]: E0414 13:32:23.596559 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:23.600093 containerd[1458]: time="2026-04-14T13:32:23.598436471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8449489587-7l5gl,Uid:ffc8f313-fd3f-415d-bc34-4f85e818a562,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:23.669738 containerd[1458]: 
time="2026-04-14T13:32:23.668847351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:23.669738 containerd[1458]: time="2026-04-14T13:32:23.669580851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:23.669738 containerd[1458]: time="2026-04-14T13:32:23.669630692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:23.670074 containerd[1458]: time="2026-04-14T13:32:23.669743757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:23.839638 systemd[1]: Started cri-containerd-1372c0d243b703fb0b942d68da9873b3078ef01f0250f062ee72af1f2a0aa414.scope - libcontainer container 1372c0d243b703fb0b942d68da9873b3078ef01f0250f062ee72af1f2a0aa414. Apr 14 13:32:24.100003 systemd[1]: Created slice kubepods-besteffort-pod3112dfc0_a6df_4cc5_8c7b_d5a3a5d76fe2.slice - libcontainer container kubepods-besteffort-pod3112dfc0_a6df_4cc5_8c7b_d5a3a5d76fe2.slice. 
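The `Created slice` lines in this log follow the kubelet's systemd cgroup-driver naming convention: a pod's slice name is built from its QoS class and its UID, with the dashes in the UID replaced by underscores to form a valid systemd unit name. A small sketch reproducing the names seen above:

```python
# Reconstruct the systemd slice name kubelet's systemd cgroup driver
# uses for a BestEffort pod: dashes in the pod UID become underscores.
def pod_slice_name(pod_uid, qos="besteffort"):
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2"))
# kubepods-besteffort-pod3112dfc0_a6df_4cc5_8c7b_d5a3a5d76fe2.slice
```

The same mapping explains the earlier kube-proxy and tigera-operator slices (UIDs `7244eee5-...` and `08c84ef7-...`).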
Apr 14 13:32:24.151123 kubelet[2512]: I0414 13:32:24.150780 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-bpffs\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.151464 kubelet[2512]: I0414 13:32:24.151410 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-policysync\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152000 kubelet[2512]: I0414 13:32:24.151856 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-cni-log-dir\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152000 kubelet[2512]: I0414 13:32:24.151987 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-sys-fs\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152000 kubelet[2512]: I0414 13:32:24.152003 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-var-run-calico\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152487 kubelet[2512]: I0414 13:32:24.152026 2512 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-cni-net-dir\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152487 kubelet[2512]: I0414 13:32:24.152047 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-node-certs\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152487 kubelet[2512]: I0414 13:32:24.152058 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-nodeproc\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152487 kubelet[2512]: I0414 13:32:24.152081 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qw2k\" (UniqueName: \"kubernetes.io/projected/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-kube-api-access-2qw2k\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152487 kubelet[2512]: I0414 13:32:24.152127 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-cni-bin-dir\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152667 kubelet[2512]: I0414 13:32:24.152141 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-lib-modules\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152667 kubelet[2512]: I0414 13:32:24.152155 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-tigera-ca-bundle\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152667 kubelet[2512]: I0414 13:32:24.152167 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-flexvol-driver-host\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.152667 kubelet[2512]: I0414 13:32:24.152178 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-xtables-lock\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.153688 kubelet[2512]: I0414 13:32:24.152779 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2-var-lib-calico\") pod \"calico-node-lsmxn\" (UID: \"3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2\") " pod="calico-system/calico-node-lsmxn" Apr 14 13:32:24.271016 kubelet[2512]: E0414 13:32:24.268311 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:24.271016 kubelet[2512]: W0414 
13:32:24.268335 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:24.271016 kubelet[2512]: E0414 13:32:24.268436 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:24.273315 kubelet[2512]: E0414 13:32:24.273196 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:24.273315 kubelet[2512]: W0414 13:32:24.273236 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:24.273315 kubelet[2512]: E0414 13:32:24.273284 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:24.379297 containerd[1458]: time="2026-04-14T13:32:24.376546159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8449489587-7l5gl,Uid:ffc8f313-fd3f-415d-bc34-4f85e818a562,Namespace:calico-system,Attempt:0,} returns sandbox id \"1372c0d243b703fb0b942d68da9873b3078ef01f0250f062ee72af1f2a0aa414\"" Apr 14 13:32:24.391798 kubelet[2512]: E0414 13:32:24.381528 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:24.400579 kubelet[2512]: W0414 13:32:24.391984 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:24.400830 kubelet[2512]: E0414 13:32:24.400646 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:24.520001 containerd[1458]: time="2026-04-14T13:32:24.519020302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lsmxn,Uid:3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:24.531229 kubelet[2512]: E0414 13:32:24.531137 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:24.569107 containerd[1458]: time="2026-04-14T13:32:24.569037900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 14 13:32:24.691452 containerd[1458]: time="2026-04-14T13:32:24.659483406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:24.691452 containerd[1458]: time="2026-04-14T13:32:24.659556364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:24.691452 containerd[1458]: time="2026-04-14T13:32:24.659570736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:24.691452 containerd[1458]: time="2026-04-14T13:32:24.660537725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:24.764589 systemd[1]: Started cri-containerd-ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1.scope - libcontainer container ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1. Apr 14 13:32:24.966212 containerd[1458]: time="2026-04-14T13:32:24.966083536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lsmxn,Uid:3112dfc0-a6df-4cc5-8c7b-d5a3a5d76fe2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\"" Apr 14 13:32:25.083529 kubelet[2512]: E0414 13:32:25.083450 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:32:25.187362 kubelet[2512]: E0414 13:32:25.187291 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.187362 kubelet[2512]: W0414 13:32:25.187343 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Apr 14 13:32:25.187362 kubelet[2512]: E0414 13:32:25.187364 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.187694 kubelet[2512]: E0414 13:32:25.187660 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.187694 kubelet[2512]: W0414 13:32:25.187685 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.187694 kubelet[2512]: E0414 13:32:25.187695 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.188391 kubelet[2512]: E0414 13:32:25.188358 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.188391 kubelet[2512]: W0414 13:32:25.188384 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.188391 kubelet[2512]: E0414 13:32:25.188394 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.189516 kubelet[2512]: E0414 13:32:25.189443 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.189516 kubelet[2512]: W0414 13:32:25.189485 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.189516 kubelet[2512]: E0414 13:32:25.189498 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.201377 kubelet[2512]: E0414 13:32:25.200842 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.201377 kubelet[2512]: W0414 13:32:25.200868 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.201377 kubelet[2512]: E0414 13:32:25.200890 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.231883 kubelet[2512]: E0414 13:32:25.229977 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.231883 kubelet[2512]: W0414 13:32:25.231292 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.231883 kubelet[2512]: E0414 13:32:25.231522 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.235870 kubelet[2512]: E0414 13:32:25.235414 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.235870 kubelet[2512]: W0414 13:32:25.235441 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.239207 kubelet[2512]: E0414 13:32:25.237157 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.243401 kubelet[2512]: E0414 13:32:25.243352 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.244572 kubelet[2512]: W0414 13:32:25.244397 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.247507 kubelet[2512]: E0414 13:32:25.247405 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.248970 kubelet[2512]: E0414 13:32:25.248815 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.248970 kubelet[2512]: W0414 13:32:25.248861 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.249023 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.249325 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.249282 kubelet[2512]: W0414 13:32:25.249336 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.249348 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.249520 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.249282 kubelet[2512]: W0414 13:32:25.249529 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.249539 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.250837 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.249282 kubelet[2512]: W0414 13:32:25.250900 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.249282 kubelet[2512]: E0414 13:32:25.251024 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.252969 kubelet[2512]: E0414 13:32:25.252748 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.252969 kubelet[2512]: W0414 13:32:25.252766 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.252969 kubelet[2512]: E0414 13:32:25.252799 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.253335 kubelet[2512]: E0414 13:32:25.253241 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.253335 kubelet[2512]: W0414 13:32:25.253250 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.253335 kubelet[2512]: E0414 13:32:25.253258 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.253492 kubelet[2512]: E0414 13:32:25.253433 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.253492 kubelet[2512]: W0414 13:32:25.253439 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.253492 kubelet[2512]: E0414 13:32:25.253445 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.253797 kubelet[2512]: E0414 13:32:25.253739 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.253797 kubelet[2512]: W0414 13:32:25.253746 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.253797 kubelet[2512]: E0414 13:32:25.253752 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.254083 kubelet[2512]: E0414 13:32:25.254044 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.254083 kubelet[2512]: W0414 13:32:25.254051 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.254083 kubelet[2512]: E0414 13:32:25.254058 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.254306 kubelet[2512]: E0414 13:32:25.254267 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.254306 kubelet[2512]: W0414 13:32:25.254274 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.254306 kubelet[2512]: E0414 13:32:25.254281 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.254535 kubelet[2512]: E0414 13:32:25.254454 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.254535 kubelet[2512]: W0414 13:32:25.254460 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.254535 kubelet[2512]: E0414 13:32:25.254467 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.254742 kubelet[2512]: E0414 13:32:25.254731 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.255000 kubelet[2512]: W0414 13:32:25.254794 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.255000 kubelet[2512]: E0414 13:32:25.254807 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.287019 kubelet[2512]: E0414 13:32:25.286489 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.287345 kubelet[2512]: W0414 13:32:25.286589 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.287899 kubelet[2512]: E0414 13:32:25.287514 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.293740 kubelet[2512]: I0414 13:32:25.293288 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0f11f62-5546-4397-955f-97b1110f25d7-kubelet-dir\") pod \"csi-node-driver-dbfq7\" (UID: \"e0f11f62-5546-4397-955f-97b1110f25d7\") " pod="calico-system/csi-node-driver-dbfq7" Apr 14 13:32:25.300508 kubelet[2512]: E0414 13:32:25.294765 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.300508 kubelet[2512]: W0414 13:32:25.294798 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.300508 kubelet[2512]: E0414 13:32:25.294867 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.304317 kubelet[2512]: E0414 13:32:25.304279 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.304684 kubelet[2512]: W0414 13:32:25.304589 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.304761 kubelet[2512]: E0414 13:32:25.304751 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.310121 kubelet[2512]: E0414 13:32:25.310082 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.310297 kubelet[2512]: W0414 13:32:25.310243 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.310442 kubelet[2512]: E0414 13:32:25.310430 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.315752 kubelet[2512]: E0414 13:32:25.315563 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.316446 kubelet[2512]: W0414 13:32:25.315751 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.316446 kubelet[2512]: E0414 13:32:25.315820 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.317229 kubelet[2512]: I0414 13:32:25.317057 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e0f11f62-5546-4397-955f-97b1110f25d7-registration-dir\") pod \"csi-node-driver-dbfq7\" (UID: \"e0f11f62-5546-4397-955f-97b1110f25d7\") " pod="calico-system/csi-node-driver-dbfq7" Apr 14 13:32:25.325131 kubelet[2512]: E0414 13:32:25.324792 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.325894 kubelet[2512]: W0414 13:32:25.325665 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.327073 kubelet[2512]: E0414 13:32:25.326993 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.327681 kubelet[2512]: E0414 13:32:25.327630 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.327681 kubelet[2512]: W0414 13:32:25.327658 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.327681 kubelet[2512]: E0414 13:32:25.327675 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.327762 kubelet[2512]: I0414 13:32:25.327724 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e0f11f62-5546-4397-955f-97b1110f25d7-socket-dir\") pod \"csi-node-driver-dbfq7\" (UID: \"e0f11f62-5546-4397-955f-97b1110f25d7\") " pod="calico-system/csi-node-driver-dbfq7" Apr 14 13:32:25.335560 kubelet[2512]: E0414 13:32:25.335263 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.335834 kubelet[2512]: W0414 13:32:25.335688 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.335834 kubelet[2512]: E0414 13:32:25.335797 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.340267 kubelet[2512]: E0414 13:32:25.339579 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.351377 kubelet[2512]: W0414 13:32:25.342730 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.356746 kubelet[2512]: E0414 13:32:25.356648 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.356988 kubelet[2512]: I0414 13:32:25.356813 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e0f11f62-5546-4397-955f-97b1110f25d7-varrun\") pod \"csi-node-driver-dbfq7\" (UID: \"e0f11f62-5546-4397-955f-97b1110f25d7\") " pod="calico-system/csi-node-driver-dbfq7" Apr 14 13:32:25.360868 kubelet[2512]: E0414 13:32:25.360784 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.361004 kubelet[2512]: W0414 13:32:25.360848 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.361004 kubelet[2512]: E0414 13:32:25.360964 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.369445 kubelet[2512]: E0414 13:32:25.369365 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.369445 kubelet[2512]: W0414 13:32:25.369434 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.369445 kubelet[2512]: E0414 13:32:25.369472 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.376878 kubelet[2512]: E0414 13:32:25.376146 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.376878 kubelet[2512]: W0414 13:32:25.376197 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.376878 kubelet[2512]: E0414 13:32:25.376221 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.384083 kubelet[2512]: E0414 13:32:25.381198 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.388951 kubelet[2512]: W0414 13:32:25.383338 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.388951 kubelet[2512]: E0414 13:32:25.388693 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.389308 kubelet[2512]: I0414 13:32:25.389242 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m58v7\" (UniqueName: \"kubernetes.io/projected/e0f11f62-5546-4397-955f-97b1110f25d7-kube-api-access-m58v7\") pod \"csi-node-driver-dbfq7\" (UID: \"e0f11f62-5546-4397-955f-97b1110f25d7\") " pod="calico-system/csi-node-driver-dbfq7" Apr 14 13:32:25.446723 kubelet[2512]: E0414 13:32:25.401546 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.447475 kubelet[2512]: W0414 13:32:25.447082 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.447475 kubelet[2512]: E0414 13:32:25.447178 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.456973 kubelet[2512]: E0414 13:32:25.456754 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.456973 kubelet[2512]: W0414 13:32:25.456899 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.457130 kubelet[2512]: E0414 13:32:25.456973 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.520385 kubelet[2512]: E0414 13:32:25.520204 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.520385 kubelet[2512]: W0414 13:32:25.520242 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.520385 kubelet[2512]: E0414 13:32:25.520268 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.520870 kubelet[2512]: E0414 13:32:25.520682 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.520870 kubelet[2512]: W0414 13:32:25.520697 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.520870 kubelet[2512]: E0414 13:32:25.520709 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.527223 kubelet[2512]: E0414 13:32:25.526464 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.527223 kubelet[2512]: W0414 13:32:25.526491 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.528766 kubelet[2512]: E0414 13:32:25.526517 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.528766 kubelet[2512]: E0414 13:32:25.528534 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.528766 kubelet[2512]: W0414 13:32:25.528548 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.528766 kubelet[2512]: E0414 13:32:25.528566 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.538133 kubelet[2512]: E0414 13:32:25.537459 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.539654 kubelet[2512]: W0414 13:32:25.537522 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.544467 kubelet[2512]: E0414 13:32:25.539691 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.550449 kubelet[2512]: E0414 13:32:25.550351 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.551880 kubelet[2512]: W0414 13:32:25.550455 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.551880 kubelet[2512]: E0414 13:32:25.550524 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.556542 kubelet[2512]: E0414 13:32:25.555637 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.557677 kubelet[2512]: W0414 13:32:25.556827 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.557677 kubelet[2512]: E0414 13:32:25.557066 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.566776 kubelet[2512]: E0414 13:32:25.566682 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.566887 kubelet[2512]: W0414 13:32:25.566769 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.567688 kubelet[2512]: E0414 13:32:25.566870 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.569737 kubelet[2512]: E0414 13:32:25.569351 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.570273 kubelet[2512]: W0414 13:32:25.570077 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.570380 kubelet[2512]: E0414 13:32:25.570302 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.570971 kubelet[2512]: E0414 13:32:25.570893 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.571018 kubelet[2512]: W0414 13:32:25.570985 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.571018 kubelet[2512]: E0414 13:32:25.570999 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.571313 kubelet[2512]: E0414 13:32:25.571287 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.571313 kubelet[2512]: W0414 13:32:25.571313 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.571376 kubelet[2512]: E0414 13:32:25.571324 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.571629 kubelet[2512]: E0414 13:32:25.571576 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.571651 kubelet[2512]: W0414 13:32:25.571630 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.571651 kubelet[2512]: E0414 13:32:25.571641 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.572079 kubelet[2512]: E0414 13:32:25.571969 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.572079 kubelet[2512]: W0414 13:32:25.571985 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.572079 kubelet[2512]: E0414 13:32:25.572032 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.588166 kubelet[2512]: E0414 13:32:25.581867 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.588166 kubelet[2512]: W0414 13:32:25.582450 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.588166 kubelet[2512]: E0414 13:32:25.582634 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.588166 kubelet[2512]: E0414 13:32:25.583431 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.588166 kubelet[2512]: W0414 13:32:25.583626 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.588166 kubelet[2512]: E0414 13:32:25.583642 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.599300 kubelet[2512]: E0414 13:32:25.592348 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.599504 kubelet[2512]: W0414 13:32:25.599414 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.599504 kubelet[2512]: E0414 13:32:25.599490 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.600805 kubelet[2512]: E0414 13:32:25.600747 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.600871 kubelet[2512]: W0414 13:32:25.600834 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.600871 kubelet[2512]: E0414 13:32:25.600853 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.611463 kubelet[2512]: E0414 13:32:25.611386 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.611641 kubelet[2512]: W0414 13:32:25.611449 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.611641 kubelet[2512]: E0414 13:32:25.611534 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.615547 kubelet[2512]: E0414 13:32:25.615473 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.615734 kubelet[2512]: W0414 13:32:25.615531 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.615790 kubelet[2512]: E0414 13:32:25.615699 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.616973 kubelet[2512]: E0414 13:32:25.616858 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.616973 kubelet[2512]: W0414 13:32:25.616884 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.617357 kubelet[2512]: E0414 13:32:25.617054 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.617357 kubelet[2512]: E0414 13:32:25.617334 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.617357 kubelet[2512]: W0414 13:32:25.617342 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.617357 kubelet[2512]: E0414 13:32:25.617353 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.617580 kubelet[2512]: E0414 13:32:25.617513 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.617580 kubelet[2512]: W0414 13:32:25.617556 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.617580 kubelet[2512]: E0414 13:32:25.617568 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.617975 kubelet[2512]: E0414 13:32:25.617887 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.617975 kubelet[2512]: W0414 13:32:25.617974 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.618045 kubelet[2512]: E0414 13:32:25.617986 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.624719 kubelet[2512]: E0414 13:32:25.624615 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.624719 kubelet[2512]: W0414 13:32:25.624692 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.624947 kubelet[2512]: E0414 13:32:25.624784 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:25.627119 kubelet[2512]: E0414 13:32:25.625200 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.627119 kubelet[2512]: W0414 13:32:25.625210 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.627119 kubelet[2512]: E0414 13:32:25.625219 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:25.742804 kubelet[2512]: E0414 13:32:25.742721 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:25.742804 kubelet[2512]: W0414 13:32:25.742794 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:25.743409 kubelet[2512]: E0414 13:32:25.742845 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:26.577949 kubelet[2512]: E0414 13:32:26.577852 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:32:27.489742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879139736.mount: Deactivated successfully. 
Apr 14 13:32:28.584055 kubelet[2512]: E0414 13:32:28.583943 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:32:29.145553 containerd[1458]: time="2026-04-14T13:32:29.142565519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:29.147545 containerd[1458]: time="2026-04-14T13:32:29.147436901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 14 13:32:29.149849 containerd[1458]: time="2026-04-14T13:32:29.149766401Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:29.158438 containerd[1458]: time="2026-04-14T13:32:29.158348431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:29.162215 containerd[1458]: time="2026-04-14T13:32:29.161818906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.592722011s" Apr 14 13:32:29.162431 containerd[1458]: time="2026-04-14T13:32:29.162286600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 14 13:32:29.176992 containerd[1458]: time="2026-04-14T13:32:29.176651981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 14 13:32:29.243479 containerd[1458]: time="2026-04-14T13:32:29.243353775Z" level=info msg="CreateContainer within sandbox \"1372c0d243b703fb0b942d68da9873b3078ef01f0250f062ee72af1f2a0aa414\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 14 13:32:29.383188 containerd[1458]: time="2026-04-14T13:32:29.382372575Z" level=info msg="CreateContainer within sandbox \"1372c0d243b703fb0b942d68da9873b3078ef01f0250f062ee72af1f2a0aa414\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed6404b7425a467d87c9059f5159702d3eb9f0fc3508af4b12eda0e57f2cd9c3\"" Apr 14 13:32:29.419539 containerd[1458]: time="2026-04-14T13:32:29.417780823Z" level=info msg="StartContainer for \"ed6404b7425a467d87c9059f5159702d3eb9f0fc3508af4b12eda0e57f2cd9c3\"" Apr 14 13:32:29.542662 systemd[1]: Started cri-containerd-ed6404b7425a467d87c9059f5159702d3eb9f0fc3508af4b12eda0e57f2cd9c3.scope - libcontainer container ed6404b7425a467d87c9059f5159702d3eb9f0fc3508af4b12eda0e57f2cd9c3. 
Apr 14 13:32:29.706148 containerd[1458]: time="2026-04-14T13:32:29.705881250Z" level=info msg="StartContainer for \"ed6404b7425a467d87c9059f5159702d3eb9f0fc3508af4b12eda0e57f2cd9c3\" returns successfully" Apr 14 13:32:30.536053 kubelet[2512]: E0414 13:32:30.535493 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:30.581214 kubelet[2512]: E0414 13:32:30.580778 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:32:30.694334 kubelet[2512]: E0414 13:32:30.694206 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.694334 kubelet[2512]: W0414 13:32:30.694292 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.694334 kubelet[2512]: E0414 13:32:30.694351 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.721724 kubelet[2512]: E0414 13:32:30.716759 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.722510 kubelet[2512]: W0414 13:32:30.722290 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.722510 kubelet[2512]: E0414 13:32:30.722481 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.727881 kubelet[2512]: E0414 13:32:30.727818 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.728106 kubelet[2512]: W0414 13:32:30.727868 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.728106 kubelet[2512]: E0414 13:32:30.727979 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.737125 kubelet[2512]: E0414 13:32:30.736945 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.737125 kubelet[2512]: W0414 13:32:30.737094 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.737125 kubelet[2512]: E0414 13:32:30.737175 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.748748 kubelet[2512]: E0414 13:32:30.746275 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.748748 kubelet[2512]: W0414 13:32:30.746297 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.748748 kubelet[2512]: E0414 13:32:30.746315 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.748748 kubelet[2512]: E0414 13:32:30.746633 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.748748 kubelet[2512]: W0414 13:32:30.746691 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.748748 kubelet[2512]: E0414 13:32:30.746703 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.756124 kubelet[2512]: E0414 13:32:30.754740 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.756124 kubelet[2512]: W0414 13:32:30.755146 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.756124 kubelet[2512]: E0414 13:32:30.755280 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.774251 kubelet[2512]: E0414 13:32:30.773873 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.779241 kubelet[2512]: W0414 13:32:30.775060 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.781019 kubelet[2512]: E0414 13:32:30.780143 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.842063 kubelet[2512]: E0414 13:32:30.838803 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.842063 kubelet[2512]: W0414 13:32:30.839690 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.846716 kubelet[2512]: E0414 13:32:30.846551 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.847310 kubelet[2512]: E0414 13:32:30.847253 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.847395 kubelet[2512]: W0414 13:32:30.847305 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.847395 kubelet[2512]: E0414 13:32:30.847375 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.847742 kubelet[2512]: E0414 13:32:30.847700 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.847742 kubelet[2512]: W0414 13:32:30.847736 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.847819 kubelet[2512]: E0414 13:32:30.847750 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.848059 kubelet[2512]: E0414 13:32:30.848020 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.848059 kubelet[2512]: W0414 13:32:30.848054 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.848138 kubelet[2512]: E0414 13:32:30.848066 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.850045 kubelet[2512]: E0414 13:32:30.849993 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.850045 kubelet[2512]: W0414 13:32:30.850025 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.850207 kubelet[2512]: E0414 13:32:30.850069 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.857030 kubelet[2512]: E0414 13:32:30.856871 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.857233 kubelet[2512]: W0414 13:32:30.857016 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.857233 kubelet[2512]: E0414 13:32:30.857115 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.857721 kubelet[2512]: E0414 13:32:30.857580 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.857721 kubelet[2512]: W0414 13:32:30.857644 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.857721 kubelet[2512]: E0414 13:32:30.857661 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.859492 kubelet[2512]: E0414 13:32:30.859348 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.859746 kubelet[2512]: W0414 13:32:30.859476 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.859746 kubelet[2512]: E0414 13:32:30.859540 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.861080 kubelet[2512]: E0414 13:32:30.859845 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.861080 kubelet[2512]: W0414 13:32:30.859855 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.861080 kubelet[2512]: E0414 13:32:30.859865 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.861080 kubelet[2512]: E0414 13:32:30.860044 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.861080 kubelet[2512]: W0414 13:32:30.860050 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.861080 kubelet[2512]: E0414 13:32:30.860057 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.869053 kubelet[2512]: E0414 13:32:30.866244 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.869657 kubelet[2512]: W0414 13:32:30.869032 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.869657 kubelet[2512]: E0414 13:32:30.869238 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.883161 kubelet[2512]: E0414 13:32:30.883083 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.883161 kubelet[2512]: W0414 13:32:30.883126 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.883161 kubelet[2512]: E0414 13:32:30.883181 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.885872 kubelet[2512]: E0414 13:32:30.885213 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.886389 kubelet[2512]: W0414 13:32:30.886199 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.886427 kubelet[2512]: E0414 13:32:30.886353 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.898205 kubelet[2512]: E0414 13:32:30.898058 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.898205 kubelet[2512]: W0414 13:32:30.898173 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.898752 kubelet[2512]: E0414 13:32:30.898259 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.942512 kubelet[2512]: E0414 13:32:30.942416 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.942512 kubelet[2512]: W0414 13:32:30.942483 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.943879 kubelet[2512]: E0414 13:32:30.942591 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.950317 kubelet[2512]: E0414 13:32:30.946852 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.950317 kubelet[2512]: W0414 13:32:30.946892 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.950317 kubelet[2512]: E0414 13:32:30.948336 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.953100 kubelet[2512]: E0414 13:32:30.952650 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.961406 kubelet[2512]: W0414 13:32:30.960767 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.965970 kubelet[2512]: E0414 13:32:30.963443 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.972584 kubelet[2512]: E0414 13:32:30.971726 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.972584 kubelet[2512]: W0414 13:32:30.971759 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.972584 kubelet[2512]: E0414 13:32:30.971833 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.987213 kubelet[2512]: E0414 13:32:30.986307 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.987691 kubelet[2512]: W0414 13:32:30.987403 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.987691 kubelet[2512]: E0414 13:32:30.987434 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.990553 kubelet[2512]: E0414 13:32:30.988101 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.990553 kubelet[2512]: W0414 13:32:30.989608 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.990553 kubelet[2512]: E0414 13:32:30.989761 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:30.994512 kubelet[2512]: E0414 13:32:30.994489 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.994606 kubelet[2512]: W0414 13:32:30.994591 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.994720 kubelet[2512]: E0414 13:32:30.994708 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:30.995123 kubelet[2512]: E0414 13:32:30.995083 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:30.995205 kubelet[2512]: W0414 13:32:30.995194 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:30.995261 kubelet[2512]: E0414 13:32:30.995252 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.043272 kubelet[2512]: E0414 13:32:31.043170 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.043586 kubelet[2512]: W0414 13:32:31.043527 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.049895 kubelet[2512]: E0414 13:32:31.048318 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.144547 kubelet[2512]: E0414 13:32:31.144314 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.144547 kubelet[2512]: W0414 13:32:31.144429 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.144547 kubelet[2512]: E0414 13:32:31.144479 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.164295 kubelet[2512]: E0414 13:32:31.163586 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.164608 kubelet[2512]: W0414 13:32:31.164455 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.164608 kubelet[2512]: E0414 13:32:31.164595 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.575007 kubelet[2512]: I0414 13:32:31.572655 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 13:32:31.587665 kubelet[2512]: E0414 13:32:31.587439 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:31.640152 containerd[1458]: time="2026-04-14T13:32:31.637505259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:31.658785 containerd[1458]: time="2026-04-14T13:32:31.654760935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 14 13:32:31.664482 containerd[1458]: time="2026-04-14T13:32:31.663839526Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:31.687803 containerd[1458]: time="2026-04-14T13:32:31.687507061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:31.692014 containerd[1458]: time="2026-04-14T13:32:31.691521038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.514775096s" Apr 14 13:32:31.692014 containerd[1458]: time="2026-04-14T13:32:31.691597281Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 14 13:32:31.700152 kubelet[2512]: E0414 13:32:31.699358 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.759857 kubelet[2512]: W0414 13:32:31.758641 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.762053 kubelet[2512]: E0414 13:32:31.761951 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.763996 kubelet[2512]: E0414 13:32:31.763888 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.763996 kubelet[2512]: W0414 13:32:31.763983 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.764157 kubelet[2512]: E0414 13:32:31.764005 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.769671 kubelet[2512]: E0414 13:32:31.769253 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.776574 kubelet[2512]: W0414 13:32:31.769821 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.781406 containerd[1458]: time="2026-04-14T13:32:31.781143923Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 14 13:32:31.786262 kubelet[2512]: E0414 13:32:31.783605 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.806320 kubelet[2512]: E0414 13:32:31.806227 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.815064 kubelet[2512]: W0414 13:32:31.810811 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.815064 kubelet[2512]: E0414 13:32:31.811351 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.828671 kubelet[2512]: E0414 13:32:31.828476 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.840661 kubelet[2512]: W0414 13:32:31.840466 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.844004 kubelet[2512]: E0414 13:32:31.843899 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.851107 kubelet[2512]: E0414 13:32:31.851037 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.851382 kubelet[2512]: W0414 13:32:31.851290 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.851457 kubelet[2512]: E0414 13:32:31.851447 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.854005 containerd[1458]: time="2026-04-14T13:32:31.853875613Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421\"" Apr 14 13:32:31.882145 kubelet[2512]: E0414 13:32:31.881817 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.895527 kubelet[2512]: W0414 13:32:31.884826 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.896010 kubelet[2512]: E0414 13:32:31.895755 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.900430 kubelet[2512]: E0414 13:32:31.899714 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.900835 kubelet[2512]: W0414 13:32:31.900710 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.920519 kubelet[2512]: E0414 13:32:31.900861 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.971279 containerd[1458]: time="2026-04-14T13:32:31.957042338Z" level=info msg="StartContainer for \"13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421\"" Apr 14 13:32:31.980588 kubelet[2512]: E0414 13:32:31.980269 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.989829 kubelet[2512]: W0414 13:32:31.989359 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.989829 kubelet[2512]: E0414 13:32:31.989575 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.990093 kubelet[2512]: E0414 13:32:31.990025 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.990093 kubelet[2512]: W0414 13:32:31.990038 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.990093 kubelet[2512]: E0414 13:32:31.990051 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:31.990729 kubelet[2512]: E0414 13:32:31.990586 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.991237 kubelet[2512]: W0414 13:32:31.990763 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.991237 kubelet[2512]: E0414 13:32:31.990800 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:31.999127 kubelet[2512]: E0414 13:32:31.999019 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:31.999313 kubelet[2512]: W0414 13:32:31.999174 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:31.999313 kubelet[2512]: E0414 13:32:31.999243 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.006720 kubelet[2512]: E0414 13:32:32.006586 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.007094 kubelet[2512]: W0414 13:32:32.006724 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.007094 kubelet[2512]: E0414 13:32:32.006782 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.008234 kubelet[2512]: E0414 13:32:32.008106 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.008234 kubelet[2512]: W0414 13:32:32.008139 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.008234 kubelet[2512]: E0414 13:32:32.008151 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.008334 kubelet[2512]: E0414 13:32:32.008270 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.008334 kubelet[2512]: W0414 13:32:32.008275 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.008334 kubelet[2512]: E0414 13:32:32.008284 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.164945 kubelet[2512]: E0414 13:32:32.160349 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.168868 kubelet[2512]: W0414 13:32:32.168697 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.168868 kubelet[2512]: E0414 13:32:32.168801 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.169464 kubelet[2512]: E0414 13:32:32.169434 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.169464 kubelet[2512]: W0414 13:32:32.169447 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.169464 kubelet[2512]: E0414 13:32:32.169459 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.173971 kubelet[2512]: E0414 13:32:32.172399 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.173971 kubelet[2512]: W0414 13:32:32.172496 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.173971 kubelet[2512]: E0414 13:32:32.172602 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.180864 kubelet[2512]: E0414 13:32:32.180654 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.180864 kubelet[2512]: W0414 13:32:32.180789 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.181169 kubelet[2512]: E0414 13:32:32.180890 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.193022 kubelet[2512]: E0414 13:32:32.192260 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.193022 kubelet[2512]: W0414 13:32:32.192287 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.193022 kubelet[2512]: E0414 13:32:32.192357 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.217747 kubelet[2512]: E0414 13:32:32.216709 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.217747 kubelet[2512]: W0414 13:32:32.216779 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.217747 kubelet[2512]: E0414 13:32:32.216858 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.217747 kubelet[2512]: E0414 13:32:32.217705 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.217747 kubelet[2512]: W0414 13:32:32.217778 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.217747 kubelet[2512]: E0414 13:32:32.217797 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.218949 kubelet[2512]: E0414 13:32:32.218136 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.218949 kubelet[2512]: W0414 13:32:32.218145 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.218949 kubelet[2512]: E0414 13:32:32.218155 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.220535 systemd[1]: Started cri-containerd-13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421.scope - libcontainer container 13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421. Apr 14 13:32:32.225404 kubelet[2512]: E0414 13:32:32.225015 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.225404 kubelet[2512]: W0414 13:32:32.225038 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.225404 kubelet[2512]: E0414 13:32:32.225106 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.228294 kubelet[2512]: E0414 13:32:32.225452 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.228294 kubelet[2512]: W0414 13:32:32.225464 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.228294 kubelet[2512]: E0414 13:32:32.225478 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.228294 kubelet[2512]: E0414 13:32:32.227053 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.228294 kubelet[2512]: W0414 13:32:32.227118 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.228294 kubelet[2512]: E0414 13:32:32.227177 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.231974 kubelet[2512]: E0414 13:32:32.231075 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.234887 kubelet[2512]: W0414 13:32:32.232844 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.236114 kubelet[2512]: E0414 13:32:32.236047 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.236597 kubelet[2512]: E0414 13:32:32.236584 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.236747 kubelet[2512]: W0414 13:32:32.236704 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.236826 kubelet[2512]: E0414 13:32:32.236812 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.237316 kubelet[2512]: E0414 13:32:32.237305 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.237389 kubelet[2512]: W0414 13:32:32.237379 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.237452 kubelet[2512]: E0414 13:32:32.237441 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.242473 kubelet[2512]: E0414 13:32:32.242336 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.242473 kubelet[2512]: W0414 13:32:32.242350 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.242473 kubelet[2512]: E0414 13:32:32.242364 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.245790 kubelet[2512]: E0414 13:32:32.245776 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.246080 kubelet[2512]: W0414 13:32:32.245848 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.246080 kubelet[2512]: E0414 13:32:32.245861 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.250580 kubelet[2512]: E0414 13:32:32.250542 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.251010 kubelet[2512]: W0414 13:32:32.250966 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.251425 kubelet[2512]: E0414 13:32:32.251077 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:32.252669 kubelet[2512]: E0414 13:32:32.252535 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:32.252669 kubelet[2512]: W0414 13:32:32.252555 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:32.252669 kubelet[2512]: E0414 13:32:32.252606 2512 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:32.516283 containerd[1458]: time="2026-04-14T13:32:32.516100892Z" level=info msg="StartContainer for \"13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421\" returns successfully" Apr 14 13:32:32.553282 systemd[1]: cri-containerd-13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421.scope: Deactivated successfully. Apr 14 13:32:32.652966 kubelet[2512]: E0414 13:32:32.652453 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:32:32.754171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421-rootfs.mount: Deactivated successfully. 
Apr 14 13:32:32.769762 containerd[1458]: time="2026-04-14T13:32:32.762681106Z" level=info msg="shim disconnected" id=13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421 namespace=k8s.io Apr 14 13:32:32.769762 containerd[1458]: time="2026-04-14T13:32:32.769681010Z" level=warning msg="cleaning up after shim disconnected" id=13a974f12ad528019dbfc136c41ccb52b5f0e50ce3a7ec8f265f6e3def4aa421 namespace=k8s.io Apr 14 13:32:32.769762 containerd[1458]: time="2026-04-14T13:32:32.769697699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:32:32.898089 kubelet[2512]: I0414 13:32:32.889893 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8449489587-7l5gl" podStartSLOduration=6.26601846 podStartE2EDuration="10.889871455s" podCreationTimestamp="2026-04-14 13:32:22 +0000 UTC" firstStartedPulling="2026-04-14 13:32:24.546819922 +0000 UTC m=+29.383152623" lastFinishedPulling="2026-04-14 13:32:29.170672948 +0000 UTC m=+34.007005618" observedRunningTime="2026-04-14 13:32:31.042302674 +0000 UTC m=+35.878635338" watchObservedRunningTime="2026-04-14 13:32:32.889871455 +0000 UTC m=+37.726204126" Apr 14 13:32:33.695883 containerd[1458]: time="2026-04-14T13:32:33.695183639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 14 13:32:34.581084 kubelet[2512]: E0414 13:32:34.580298 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:32:36.591689 kubelet[2512]: E0414 13:32:36.591532 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:38.591415 kubelet[2512]: E0414 13:32:38.591269 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:40.579808 kubelet[2512]: E0414 13:32:40.579636 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:42.663605 kubelet[2512]: E0414 13:32:42.663099 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:44.582004 kubelet[2512]: E0414 13:32:44.581860 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:45.933023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909714379.mount: Deactivated successfully.
Apr 14 13:32:46.001210 containerd[1458]: time="2026-04-14T13:32:46.000061188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:32:46.046413 containerd[1458]: time="2026-04-14T13:32:46.046230304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 14 13:32:46.049995 containerd[1458]: time="2026-04-14T13:32:46.049811305Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:32:46.054174 containerd[1458]: time="2026-04-14T13:32:46.054071692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:32:46.061044 containerd[1458]: time="2026-04-14T13:32:46.060769159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.364846098s"
Apr 14 13:32:46.064380 containerd[1458]: time="2026-04-14T13:32:46.063582029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 14 13:32:46.099099 containerd[1458]: time="2026-04-14T13:32:46.097483064Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 14 13:32:46.282467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872598319.mount: Deactivated successfully.
Apr 14 13:32:46.313858 containerd[1458]: time="2026-04-14T13:32:46.313526886Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac\""
Apr 14 13:32:46.325060 containerd[1458]: time="2026-04-14T13:32:46.324875331Z" level=info msg="StartContainer for \"fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac\""
Apr 14 13:32:46.581380 kubelet[2512]: E0414 13:32:46.581159 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:46.641461 systemd[1]: Started cri-containerd-fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac.scope - libcontainer container fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac.
Apr 14 13:32:46.888562 containerd[1458]: time="2026-04-14T13:32:46.888321272Z" level=info msg="StartContainer for \"fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac\" returns successfully"
Apr 14 13:32:47.174220 systemd[1]: cri-containerd-fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac.scope: Deactivated successfully.
Apr 14 13:32:47.273320 containerd[1458]: time="2026-04-14T13:32:47.272251355Z" level=info msg="shim disconnected" id=fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac namespace=k8s.io
Apr 14 13:32:47.273320 containerd[1458]: time="2026-04-14T13:32:47.272413252Z" level=warning msg="cleaning up after shim disconnected" id=fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac namespace=k8s.io
Apr 14 13:32:47.273320 containerd[1458]: time="2026-04-14T13:32:47.272421706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:32:47.272607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd12e8ab7dfd469054150dececb1371f418a527715752749bc276f0011522aac-rootfs.mount: Deactivated successfully.
Apr 14 13:32:47.356613 containerd[1458]: time="2026-04-14T13:32:47.356052221Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:32:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:32:48.175433 containerd[1458]: time="2026-04-14T13:32:48.173029436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 14 13:32:48.558962 kubelet[2512]: I0414 13:32:48.558730 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 14 13:32:48.564233 kubelet[2512]: E0414 13:32:48.563642 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:32:48.596835 kubelet[2512]: E0414 13:32:48.584783 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:49.226079 kubelet[2512]: E0414 13:32:49.225866 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:32:50.605810 kubelet[2512]: E0414 13:32:50.603680 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:52.595976 kubelet[2512]: E0414 13:32:52.595747 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:54.592070 kubelet[2512]: E0414 13:32:54.589709 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:54.683347 containerd[1458]: time="2026-04-14T13:32:54.682566371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:32:54.692212 containerd[1458]: time="2026-04-14T13:32:54.691330854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 14 13:32:54.720698 containerd[1458]: time="2026-04-14T13:32:54.699242599Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:32:54.746871 containerd[1458]: time="2026-04-14T13:32:54.746824544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:32:54.751241 containerd[1458]: time="2026-04-14T13:32:54.750566349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 6.57742573s"
Apr 14 13:32:54.751241 containerd[1458]: time="2026-04-14T13:32:54.750672727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 14 13:32:54.845191 containerd[1458]: time="2026-04-14T13:32:54.843623600Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 14 13:32:54.942498 containerd[1458]: time="2026-04-14T13:32:54.899270565Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7\""
Apr 14 13:32:54.963615 containerd[1458]: time="2026-04-14T13:32:54.963471853Z" level=info msg="StartContainer for \"4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7\""
Apr 14 13:32:55.064984 systemd[1]: run-containerd-runc-k8s.io-4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7-runc.PtHxPq.mount: Deactivated successfully.
Apr 14 13:32:55.099623 systemd[1]: Started cri-containerd-4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7.scope - libcontainer container 4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7.
Apr 14 13:32:55.288339 containerd[1458]: time="2026-04-14T13:32:55.288177736Z" level=info msg="StartContainer for \"4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7\" returns successfully"
Apr 14 13:32:56.594019 kubelet[2512]: E0414 13:32:56.593599 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:32:57.879984 containerd[1458]: time="2026-04-14T13:32:57.879244712Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 13:32:57.882519 systemd[1]: cri-containerd-4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7.scope: Deactivated successfully.
Apr 14 13:32:57.884273 systemd[1]: cri-containerd-4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7.scope: Consumed 1.631s CPU time.
Apr 14 13:32:58.105659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7-rootfs.mount: Deactivated successfully.
Apr 14 13:32:58.114100 kubelet[2512]: I0414 13:32:58.113313 2512 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 14 13:32:58.244328 containerd[1458]: time="2026-04-14T13:32:58.243862284Z" level=info msg="shim disconnected" id=4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7 namespace=k8s.io
Apr 14 13:32:58.244328 containerd[1458]: time="2026-04-14T13:32:58.244336172Z" level=warning msg="cleaning up after shim disconnected" id=4a8ab6e61b6ee6660ec76e056a798d463bce159d026285c55bb6fc4dd17033d7 namespace=k8s.io
Apr 14 13:32:58.251810 containerd[1458]: time="2026-04-14T13:32:58.244366480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:32:58.687058 systemd[1]: Created slice kubepods-burstable-podfbc203ea_65cb_4880_91f1_00f13ee08f83.slice - libcontainer container kubepods-burstable-podfbc203ea_65cb_4880_91f1_00f13ee08f83.slice.
Apr 14 13:32:58.734788 containerd[1458]: time="2026-04-14T13:32:58.732731444Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 14 13:32:58.750455 kubelet[2512]: I0414 13:32:58.749832 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-ca-bundle\") pod \"whisker-75dbfc9fc8-snl69\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " pod="calico-system/whisker-75dbfc9fc8-snl69"
Apr 14 13:32:58.752979 kubelet[2512]: I0414 13:32:58.752875 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-nginx-config\") pod \"whisker-75dbfc9fc8-snl69\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " pod="calico-system/whisker-75dbfc9fc8-snl69"
Apr 14 13:32:58.752979 kubelet[2512]: I0414 13:32:58.752981 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-backend-key-pair\") pod \"whisker-75dbfc9fc8-snl69\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " pod="calico-system/whisker-75dbfc9fc8-snl69"
Apr 14 13:32:58.753140 kubelet[2512]: I0414 13:32:58.753012 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qghf\" (UniqueName: \"kubernetes.io/projected/fbc203ea-65cb-4880-91f1-00f13ee08f83-kube-api-access-6qghf\") pod \"coredns-66bc5c9577-g5q72\" (UID: \"fbc203ea-65cb-4880-91f1-00f13ee08f83\") " pod="kube-system/coredns-66bc5c9577-g5q72"
Apr 14 13:32:58.753140 kubelet[2512]: I0414 13:32:58.753072 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc203ea-65cb-4880-91f1-00f13ee08f83-config-volume\") pod \"coredns-66bc5c9577-g5q72\" (UID: \"fbc203ea-65cb-4880-91f1-00f13ee08f83\") " pod="kube-system/coredns-66bc5c9577-g5q72"
Apr 14 13:32:58.753140 kubelet[2512]: I0414 13:32:58.753088 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9tr\" (UniqueName: \"kubernetes.io/projected/59b4dce6-3ea4-42d3-8deb-202db303fb14-kube-api-access-dl9tr\") pod \"whisker-75dbfc9fc8-snl69\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " pod="calico-system/whisker-75dbfc9fc8-snl69"
Apr 14 13:32:58.785485 containerd[1458]: time="2026-04-14T13:32:58.784288015Z" level=info msg="CreateContainer within sandbox \"ca5260a8e252189afc57856a5f9e41c2dbe4f49d9e9edbe0bcecd9b90970f1a1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69\""
Apr 14 13:32:58.799664 systemd[1]: Created slice kubepods-besteffort-pode0f11f62_5546_4397_955f_97b1110f25d7.slice - libcontainer container kubepods-besteffort-pode0f11f62_5546_4397_955f_97b1110f25d7.slice.
Apr 14 13:32:58.817049 containerd[1458]: time="2026-04-14T13:32:58.812451466Z" level=info msg="StartContainer for \"9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69\""
Apr 14 13:32:58.868187 kubelet[2512]: I0414 13:32:58.863193 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jnr\" (UniqueName: \"kubernetes.io/projected/8dce0864-7c1c-4c82-8be6-3d53a4d967af-kube-api-access-z6jnr\") pod \"calico-apiserver-677c4b66cd-bnqqz\" (UID: \"8dce0864-7c1c-4c82-8be6-3d53a4d967af\") " pod="calico-system/calico-apiserver-677c4b66cd-bnqqz"
Apr 14 13:32:58.868187 kubelet[2512]: I0414 13:32:58.863261 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxn6d\" (UniqueName: \"kubernetes.io/projected/373fc5da-85f1-463d-a6ba-0ede19c097b3-kube-api-access-pxn6d\") pod \"coredns-66bc5c9577-dswbh\" (UID: \"373fc5da-85f1-463d-a6ba-0ede19c097b3\") " pod="kube-system/coredns-66bc5c9577-dswbh"
Apr 14 13:32:58.868187 kubelet[2512]: I0414 13:32:58.863275 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c593b64b-dfef-4876-b0f3-e403e442c5f4-config\") pod \"goldmane-cccfbd5cf-w9cxz\" (UID: \"c593b64b-dfef-4876-b0f3-e403e442c5f4\") " pod="calico-system/goldmane-cccfbd5cf-w9cxz"
Apr 14 13:32:58.868187 kubelet[2512]: I0414 13:32:58.863289 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6h6k\" (UniqueName: \"kubernetes.io/projected/c593b64b-dfef-4876-b0f3-e403e442c5f4-kube-api-access-d6h6k\") pod \"goldmane-cccfbd5cf-w9cxz\" (UID: \"c593b64b-dfef-4876-b0f3-e403e442c5f4\") " pod="calico-system/goldmane-cccfbd5cf-w9cxz"
Apr 14 13:32:58.868187 kubelet[2512]: I0414 13:32:58.863312 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a48f0b0-2c86-41cd-b28b-7d4223f81409-calico-apiserver-certs\") pod \"calico-apiserver-677c4b66cd-p7zmd\" (UID: \"7a48f0b0-2c86-41cd-b28b-7d4223f81409\") " pod="calico-system/calico-apiserver-677c4b66cd-p7zmd"
Apr 14 13:32:58.868635 kubelet[2512]: I0414 13:32:58.863326 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c593b64b-dfef-4876-b0f3-e403e442c5f4-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-w9cxz\" (UID: \"c593b64b-dfef-4876-b0f3-e403e442c5f4\") " pod="calico-system/goldmane-cccfbd5cf-w9cxz"
Apr 14 13:32:58.868635 kubelet[2512]: I0414 13:32:58.863345 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/373fc5da-85f1-463d-a6ba-0ede19c097b3-config-volume\") pod \"coredns-66bc5c9577-dswbh\" (UID: \"373fc5da-85f1-463d-a6ba-0ede19c097b3\") " pod="kube-system/coredns-66bc5c9577-dswbh"
Apr 14 13:32:58.868635 kubelet[2512]: I0414 13:32:58.863356 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4783cd1c-b4fb-4d25-b891-52cfc6659501-tigera-ca-bundle\") pod \"calico-kube-controllers-fd6dc49cc-7555d\" (UID: \"4783cd1c-b4fb-4d25-b891-52cfc6659501\") " pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d"
Apr 14 13:32:58.868635 kubelet[2512]: I0414 13:32:58.863392 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8dce0864-7c1c-4c82-8be6-3d53a4d967af-calico-apiserver-certs\") pod \"calico-apiserver-677c4b66cd-bnqqz\" (UID: \"8dce0864-7c1c-4c82-8be6-3d53a4d967af\") " pod="calico-system/calico-apiserver-677c4b66cd-bnqqz"
Apr 14 13:32:58.868635 kubelet[2512]: I0414 13:32:58.863404 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c593b64b-dfef-4876-b0f3-e403e442c5f4-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-w9cxz\" (UID: \"c593b64b-dfef-4876-b0f3-e403e442c5f4\") " pod="calico-system/goldmane-cccfbd5cf-w9cxz"
Apr 14 13:32:58.868722 kubelet[2512]: I0414 13:32:58.863436 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf5s6\" (UniqueName: \"kubernetes.io/projected/4783cd1c-b4fb-4d25-b891-52cfc6659501-kube-api-access-pf5s6\") pod \"calico-kube-controllers-fd6dc49cc-7555d\" (UID: \"4783cd1c-b4fb-4d25-b891-52cfc6659501\") " pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d"
Apr 14 13:32:58.868722 kubelet[2512]: I0414 13:32:58.863450 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkw27\" (UniqueName: \"kubernetes.io/projected/7a48f0b0-2c86-41cd-b28b-7d4223f81409-kube-api-access-dkw27\") pod \"calico-apiserver-677c4b66cd-p7zmd\" (UID: \"7a48f0b0-2c86-41cd-b28b-7d4223f81409\") " pod="calico-system/calico-apiserver-677c4b66cd-p7zmd"
Apr 14 13:32:58.871512 containerd[1458]: time="2026-04-14T13:32:58.871431782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbfq7,Uid:e0f11f62-5546-4397-955f-97b1110f25d7,Namespace:calico-system,Attempt:0,}"
Apr 14 13:32:58.879606 systemd[1]: Created slice kubepods-besteffort-pod4783cd1c_b4fb_4d25_b891_52cfc6659501.slice - libcontainer container kubepods-besteffort-pod4783cd1c_b4fb_4d25_b891_52cfc6659501.slice.
Apr 14 13:32:59.044032 systemd[1]: Created slice kubepods-besteffort-podc593b64b_dfef_4876_b0f3_e403e442c5f4.slice - libcontainer container kubepods-besteffort-podc593b64b_dfef_4876_b0f3_e403e442c5f4.slice.
Apr 14 13:32:59.087630 kubelet[2512]: E0414 13:32:59.086327 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:32:59.099210 containerd[1458]: time="2026-04-14T13:32:59.099166457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g5q72,Uid:fbc203ea-65cb-4880-91f1-00f13ee08f83,Namespace:kube-system,Attempt:0,}"
Apr 14 13:32:59.156397 systemd[1]: Started cri-containerd-9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69.scope - libcontainer container 9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69.
Apr 14 13:32:59.159443 systemd[1]: Created slice kubepods-besteffort-pod59b4dce6_3ea4_42d3_8deb_202db303fb14.slice - libcontainer container kubepods-besteffort-pod59b4dce6_3ea4_42d3_8deb_202db303fb14.slice.
Apr 14 13:32:59.225686 systemd[1]: Created slice kubepods-burstable-pod373fc5da_85f1_463d_a6ba_0ede19c097b3.slice - libcontainer container kubepods-burstable-pod373fc5da_85f1_463d_a6ba_0ede19c097b3.slice.
Apr 14 13:32:59.396280 systemd[1]: Created slice kubepods-besteffort-pod8dce0864_7c1c_4c82_8be6_3d53a4d967af.slice - libcontainer container kubepods-besteffort-pod8dce0864_7c1c_4c82_8be6_3d53a4d967af.slice.
Apr 14 13:32:59.466042 containerd[1458]: time="2026-04-14T13:32:59.465866434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75dbfc9fc8-snl69,Uid:59b4dce6-3ea4-42d3-8deb-202db303fb14,Namespace:calico-system,Attempt:0,}"
Apr 14 13:32:59.491264 systemd[1]: Created slice kubepods-besteffort-pod7a48f0b0_2c86_41cd_b28b_7d4223f81409.slice - libcontainer container kubepods-besteffort-pod7a48f0b0_2c86_41cd_b28b_7d4223f81409.slice.
Apr 14 13:32:59.528045 containerd[1458]: time="2026-04-14T13:32:59.527975446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-p7zmd,Uid:7a48f0b0-2c86-41cd-b28b-7d4223f81409,Namespace:calico-system,Attempt:0,}"
Apr 14 13:32:59.546801 kubelet[2512]: E0414 13:32:59.546681 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:32:59.548174 containerd[1458]: time="2026-04-14T13:32:59.548059285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dswbh,Uid:373fc5da-85f1-463d-a6ba-0ede19c097b3,Namespace:kube-system,Attempt:0,}"
Apr 14 13:32:59.573004 containerd[1458]: time="2026-04-14T13:32:59.572711524Z" level=info msg="StartContainer for \"9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69\" returns successfully"
Apr 14 13:32:59.672105 containerd[1458]: time="2026-04-14T13:32:59.671487517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6dc49cc-7555d,Uid:4783cd1c-b4fb-4d25-b891-52cfc6659501,Namespace:calico-system,Attempt:0,}"
Apr 14 13:32:59.757880 containerd[1458]: time="2026-04-14T13:32:59.757704504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9cxz,Uid:c593b64b-dfef-4876-b0f3-e403e442c5f4,Namespace:calico-system,Attempt:0,}"
Apr 14 13:32:59.952055 containerd[1458]: time="2026-04-14T13:32:59.951552966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-bnqqz,Uid:8dce0864-7c1c-4c82-8be6-3d53a4d967af,Namespace:calico-system,Attempt:0,}"
Apr 14 13:33:00.484774 containerd[1458]: time="2026-04-14T13:33:00.480382846Z" level=error msg="Failed to destroy network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.489097 containerd[1458]: time="2026-04-14T13:33:00.488840925Z" level=error msg="Failed to destroy network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.496811 containerd[1458]: time="2026-04-14T13:33:00.496114618Z" level=error msg="encountered an error cleaning up failed sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.501751 containerd[1458]: time="2026-04-14T13:33:00.499545598Z" level=error msg="encountered an error cleaning up failed sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.503322 containerd[1458]: time="2026-04-14T13:33:00.503142088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g5q72,Uid:fbc203ea-65cb-4880-91f1-00f13ee08f83,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.509246 containerd[1458]: time="2026-04-14T13:33:00.509117685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbfq7,Uid:e0f11f62-5546-4397-955f-97b1110f25d7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.534079 kubelet[2512]: E0414 13:33:00.533835 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.534079 kubelet[2512]: E0414 13:33:00.534061 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g5q72"
Apr 14 13:33:00.534272 kubelet[2512]: E0414 13:33:00.534089 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g5q72"
Apr 14 13:33:00.534272 kubelet[2512]: E0414 13:33:00.534162 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-g5q72_kube-system(fbc203ea-65cb-4880-91f1-00f13ee08f83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-g5q72_kube-system(fbc203ea-65cb-4880-91f1-00f13ee08f83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g5q72" podUID="fbc203ea-65cb-4880-91f1-00f13ee08f83"
Apr 14 13:33:00.537014 kubelet[2512]: E0414 13:33:00.536790 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.537014 kubelet[2512]: E0414 13:33:00.537005 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbfq7"
Apr 14 13:33:00.537220 kubelet[2512]: E0414 13:33:00.537031 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbfq7"
Apr 14 13:33:00.537337 kubelet[2512]: E0414 13:33:00.537215 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbfq7_calico-system(e0f11f62-5546-4397-955f-97b1110f25d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbfq7_calico-system(e0f11f62-5546-4397-955f-97b1110f25d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7"
Apr 14 13:33:00.589620 containerd[1458]: time="2026-04-14T13:33:00.589297138Z" level=error msg="Failed to destroy network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.591583 containerd[1458]: time="2026-04-14T13:33:00.590311550Z" level=error msg="encountered an error cleaning up failed sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.593354 containerd[1458]: time="2026-04-14T13:33:00.593323462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6dc49cc-7555d,Uid:4783cd1c-b4fb-4d25-b891-52cfc6659501,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.664007 kubelet[2512]: E0414 13:33:00.663852 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.675994 kubelet[2512]: E0414 13:33:00.675812 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d"
Apr 14 13:33:00.712117 kubelet[2512]: E0414 13:33:00.711756 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d"
Apr 14 13:33:00.712334 kubelet[2512]: E0414 13:33:00.712257 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fd6dc49cc-7555d_calico-system(4783cd1c-b4fb-4d25-b891-52cfc6659501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fd6dc49cc-7555d_calico-system(4783cd1c-b4fb-4d25-b891-52cfc6659501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d" podUID="4783cd1c-b4fb-4d25-b891-52cfc6659501"
Apr 14 13:33:00.940444 containerd[1458]: time="2026-04-14T13:33:00.939687052Z" level=error msg="Failed to destroy network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.947800 containerd[1458]: time="2026-04-14T13:33:00.944402762Z" level=error msg="encountered an error cleaning up failed sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:33:00.947800 containerd[1458]: time="2026-04-14T13:33:00.944465171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-bnqqz,Uid:8dce0864-7c1c-4c82-8be6-3d53a4d967af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory:
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:00.948244 kubelet[2512]: E0414 13:33:00.944975 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:00.948244 kubelet[2512]: E0414 13:33:00.945028 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677c4b66cd-bnqqz" Apr 14 13:33:00.948244 kubelet[2512]: E0414 13:33:00.945045 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677c4b66cd-bnqqz" Apr 14 13:33:00.948362 kubelet[2512]: E0414 13:33:00.945123 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677c4b66cd-bnqqz_calico-system(8dce0864-7c1c-4c82-8be6-3d53a4d967af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677c4b66cd-bnqqz_calico-system(8dce0864-7c1c-4c82-8be6-3d53a4d967af)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677c4b66cd-bnqqz" podUID="8dce0864-7c1c-4c82-8be6-3d53a4d967af" Apr 14 13:33:00.948362 kubelet[2512]: I0414 13:33:00.948300 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:00.958155 kubelet[2512]: I0414 13:33:00.956402 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:00.958458 containerd[1458]: time="2026-04-14T13:33:00.957605982Z" level=info msg="StopPodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\"" Apr 14 13:33:00.964619 containerd[1458]: time="2026-04-14T13:33:00.964457002Z" level=info msg="Ensure that sandbox 9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c in task-service has been cleanup successfully" Apr 14 13:33:00.965692 containerd[1458]: time="2026-04-14T13:33:00.965243257Z" level=info msg="StopPodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\"" Apr 14 13:33:00.979882 containerd[1458]: time="2026-04-14T13:33:00.978576688Z" level=info msg="Ensure that sandbox 485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4 in task-service has been cleanup successfully" Apr 14 13:33:01.025596 kubelet[2512]: I0414 13:33:01.024792 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:33:01.028434 containerd[1458]: time="2026-04-14T13:33:01.027866425Z" level=info msg="StopPodSandbox for 
\"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\"" Apr 14 13:33:01.028434 containerd[1458]: time="2026-04-14T13:33:01.029210755Z" level=info msg="Ensure that sandbox a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1 in task-service has been cleanup successfully" Apr 14 13:33:01.032281 containerd[1458]: time="2026-04-14T13:33:01.031975141Z" level=error msg="Failed to destroy network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.038272 containerd[1458]: time="2026-04-14T13:33:01.035347694Z" level=error msg="encountered an error cleaning up failed sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.038272 containerd[1458]: time="2026-04-14T13:33:01.035445763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75dbfc9fc8-snl69,Uid:59b4dce6-3ea4-42d3-8deb-202db303fb14,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.038496 kubelet[2512]: E0414 13:33:01.035669 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.038496 kubelet[2512]: E0414 13:33:01.035765 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75dbfc9fc8-snl69" Apr 14 13:33:01.038496 kubelet[2512]: E0414 13:33:01.035788 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75dbfc9fc8-snl69" Apr 14 13:33:01.038613 kubelet[2512]: E0414 13:33:01.035845 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75dbfc9fc8-snl69_calico-system(59b4dce6-3ea4-42d3-8deb-202db303fb14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75dbfc9fc8-snl69_calico-system(59b4dce6-3ea4-42d3-8deb-202db303fb14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75dbfc9fc8-snl69" podUID="59b4dce6-3ea4-42d3-8deb-202db303fb14" Apr 14 13:33:01.042255 containerd[1458]: 
time="2026-04-14T13:33:01.042107424Z" level=error msg="Failed to destroy network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.044802 containerd[1458]: time="2026-04-14T13:33:01.044256254Z" level=error msg="encountered an error cleaning up failed sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.046482 containerd[1458]: time="2026-04-14T13:33:01.046445174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dswbh,Uid:373fc5da-85f1-463d-a6ba-0ede19c097b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.055002 kubelet[2512]: E0414 13:33:01.054860 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.055002 kubelet[2512]: E0414 13:33:01.054982 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dswbh" Apr 14 13:33:01.055642 kubelet[2512]: E0414 13:33:01.055004 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dswbh" Apr 14 13:33:01.055825 kubelet[2512]: E0414 13:33:01.055112 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dswbh_kube-system(373fc5da-85f1-463d-a6ba-0ede19c097b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dswbh_kube-system(373fc5da-85f1-463d-a6ba-0ede19c097b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dswbh" podUID="373fc5da-85f1-463d-a6ba-0ede19c097b3" Apr 14 13:33:01.084888 containerd[1458]: time="2026-04-14T13:33:01.084662414Z" level=error msg="Failed to destroy network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.085565 
containerd[1458]: time="2026-04-14T13:33:01.085478158Z" level=error msg="encountered an error cleaning up failed sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.085653 containerd[1458]: time="2026-04-14T13:33:01.085573514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-p7zmd,Uid:7a48f0b0-2c86-41cd-b28b-7d4223f81409,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.101186 kubelet[2512]: E0414 13:33:01.100540 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.101706 kubelet[2512]: E0414 13:33:01.101629 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677c4b66cd-p7zmd" Apr 14 13:33:01.101837 kubelet[2512]: E0414 13:33:01.101745 2512 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677c4b66cd-p7zmd" Apr 14 13:33:01.118068 kubelet[2512]: E0414 13:33:01.117972 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677c4b66cd-p7zmd_calico-system(7a48f0b0-2c86-41cd-b28b-7d4223f81409)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677c4b66cd-p7zmd_calico-system(7a48f0b0-2c86-41cd-b28b-7d4223f81409)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677c4b66cd-p7zmd" podUID="7a48f0b0-2c86-41cd-b28b-7d4223f81409" Apr 14 13:33:01.143305 containerd[1458]: time="2026-04-14T13:33:01.140114944Z" level=error msg="Failed to destroy network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.143305 containerd[1458]: time="2026-04-14T13:33:01.140634500Z" level=error msg="encountered an error cleaning up failed sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.143305 containerd[1458]: time="2026-04-14T13:33:01.140708561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9cxz,Uid:c593b64b-dfef-4876-b0f3-e403e442c5f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.164898 kubelet[2512]: E0414 13:33:01.164842 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.190206 kubelet[2512]: E0414 13:33:01.189214 2512 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-w9cxz" Apr 14 13:33:01.190206 kubelet[2512]: E0414 13:33:01.189475 2512 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-w9cxz" Apr 14 13:33:01.194128 kubelet[2512]: E0414 13:33:01.192295 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-w9cxz_calico-system(c593b64b-dfef-4876-b0f3-e403e442c5f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-w9cxz_calico-system(c593b64b-dfef-4876-b0f3-e403e442c5f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-w9cxz" podUID="c593b64b-dfef-4876-b0f3-e403e442c5f4" Apr 14 13:33:01.227048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b-shm.mount: Deactivated successfully. Apr 14 13:33:01.228691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3-shm.mount: Deactivated successfully. Apr 14 13:33:01.232826 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c-shm.mount: Deactivated successfully. Apr 14 13:33:01.234363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf-shm.mount: Deactivated successfully. Apr 14 13:33:01.234448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7-shm.mount: Deactivated successfully. 
Apr 14 13:33:01.234518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c-shm.mount: Deactivated successfully. Apr 14 13:33:01.235113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4-shm.mount: Deactivated successfully. Apr 14 13:33:01.236771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1-shm.mount: Deactivated successfully. Apr 14 13:33:01.342628 containerd[1458]: time="2026-04-14T13:33:01.342507304Z" level=error msg="StopPodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" failed" error="failed to destroy network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.346540 kubelet[2512]: E0414 13:33:01.345651 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:01.347785 kubelet[2512]: E0414 13:33:01.347442 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c"} Apr 14 13:33:01.347785 kubelet[2512]: E0414 13:33:01.347608 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"4783cd1c-b4fb-4d25-b891-52cfc6659501\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:01.347785 kubelet[2512]: E0414 13:33:01.347693 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4783cd1c-b4fb-4d25-b891-52cfc6659501\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d" podUID="4783cd1c-b4fb-4d25-b891-52cfc6659501" Apr 14 13:33:01.356651 containerd[1458]: time="2026-04-14T13:33:01.356488031Z" level=error msg="StopPodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" failed" error="failed to destroy network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:01.361732 kubelet[2512]: E0414 13:33:01.361522 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:01.362274 kubelet[2512]: E0414 13:33:01.361698 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4"} Apr 14 13:33:01.363857 kubelet[2512]: E0414 13:33:01.363796 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbc203ea-65cb-4880-91f1-00f13ee08f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:01.366249 kubelet[2512]: E0414 13:33:01.364101 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbc203ea-65cb-4880-91f1-00f13ee08f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g5q72" podUID="fbc203ea-65cb-4880-91f1-00f13ee08f83" Apr 14 13:33:01.380812 containerd[1458]: time="2026-04-14T13:33:01.380404994Z" level=error msg="StopPodSandbox for \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" failed" error="failed to destroy network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 14 13:33:01.386989 kubelet[2512]: E0414 13:33:01.385757 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:33:01.386989 kubelet[2512]: E0414 13:33:01.385882 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1"} Apr 14 13:33:01.388459 kubelet[2512]: E0414 13:33:01.387778 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0f11f62-5546-4397-955f-97b1110f25d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:01.388810 kubelet[2512]: E0414 13:33:01.388565 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0f11f62-5546-4397-955f-97b1110f25d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbfq7" podUID="e0f11f62-5546-4397-955f-97b1110f25d7" Apr 14 13:33:01.844996 systemd[1]: 
run-containerd-runc-k8s.io-9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69-runc.XhVY3H.mount: Deactivated successfully. Apr 14 13:33:01.913411 kubelet[2512]: I0414 13:33:01.910375 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lsmxn" podStartSLOduration=9.113047851 podStartE2EDuration="38.910302935s" podCreationTimestamp="2026-04-14 13:32:23 +0000 UTC" firstStartedPulling="2026-04-14 13:32:24.986335176 +0000 UTC m=+29.822667836" lastFinishedPulling="2026-04-14 13:32:54.783590261 +0000 UTC m=+59.619922920" observedRunningTime="2026-04-14 13:33:01.846407384 +0000 UTC m=+66.682740073" watchObservedRunningTime="2026-04-14 13:33:01.910302935 +0000 UTC m=+66.746635608" Apr 14 13:33:02.077295 kubelet[2512]: I0414 13:33:02.074370 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:33:02.081636 containerd[1458]: time="2026-04-14T13:33:02.081499292Z" level=info msg="StopPodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\"" Apr 14 13:33:02.082308 containerd[1458]: time="2026-04-14T13:33:02.081820619Z" level=info msg="Ensure that sandbox 53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c in task-service has been cleanup successfully" Apr 14 13:33:02.096961 kubelet[2512]: I0414 13:33:02.096834 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Apr 14 13:33:02.098544 containerd[1458]: time="2026-04-14T13:33:02.097668483Z" level=info msg="StopPodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\"" Apr 14 13:33:02.099402 containerd[1458]: time="2026-04-14T13:33:02.099212297Z" level=info msg="Ensure that sandbox 71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3 in task-service has been cleanup successfully" 
Apr 14 13:33:02.127154 kubelet[2512]: I0414 13:33:02.126532 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:33:02.132668 containerd[1458]: time="2026-04-14T13:33:02.132145040Z" level=info msg="StopPodSandbox for \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\"" Apr 14 13:33:02.136648 containerd[1458]: time="2026-04-14T13:33:02.136505038Z" level=info msg="Ensure that sandbox 2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b in task-service has been cleanup successfully" Apr 14 13:33:02.247601 kubelet[2512]: I0414 13:33:02.246487 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Apr 14 13:33:02.251320 containerd[1458]: time="2026-04-14T13:33:02.250170815Z" level=info msg="StopPodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\"" Apr 14 13:33:02.253265 containerd[1458]: time="2026-04-14T13:33:02.253133693Z" level=info msg="Ensure that sandbox 5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7 in task-service has been cleanup successfully" Apr 14 13:33:02.292811 kubelet[2512]: I0414 13:33:02.292145 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:33:02.357504 containerd[1458]: time="2026-04-14T13:33:02.357274950Z" level=info msg="StopPodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\"" Apr 14 13:33:02.377085 containerd[1458]: time="2026-04-14T13:33:02.376308083Z" level=info msg="Ensure that sandbox a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf in task-service has been cleanup successfully" Apr 14 13:33:02.425264 containerd[1458]: time="2026-04-14T13:33:02.422519472Z" level=error msg="StopPodSandbox for 
\"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" failed" error="failed to destroy network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:02.434343 kubelet[2512]: E0414 13:33:02.434175 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:33:02.436434 kubelet[2512]: E0414 13:33:02.436346 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b"} Apr 14 13:33:02.440470 kubelet[2512]: E0414 13:33:02.439629 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8dce0864-7c1c-4c82-8be6-3d53a4d967af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:02.450296 kubelet[2512]: E0414 13:33:02.448668 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8dce0864-7c1c-4c82-8be6-3d53a4d967af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677c4b66cd-bnqqz" podUID="8dce0864-7c1c-4c82-8be6-3d53a4d967af" Apr 14 13:33:02.462829 containerd[1458]: time="2026-04-14T13:33:02.462644985Z" level=error msg="StopPodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" failed" error="failed to destroy network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:02.470892 kubelet[2512]: E0414 13:33:02.470488 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:33:02.473726 kubelet[2512]: E0414 13:33:02.471241 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c"} Apr 14 13:33:02.474604 kubelet[2512]: E0414 13:33:02.474308 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59b4dce6-3ea4-42d3-8deb-202db303fb14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:02.474604 kubelet[2512]: E0414 13:33:02.474466 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59b4dce6-3ea4-42d3-8deb-202db303fb14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75dbfc9fc8-snl69" podUID="59b4dce6-3ea4-42d3-8deb-202db303fb14" Apr 14 13:33:02.590571 containerd[1458]: time="2026-04-14T13:33:02.590364487Z" level=error msg="StopPodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" failed" error="failed to destroy network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:02.596997 kubelet[2512]: E0414 13:33:02.595422 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Apr 14 13:33:02.596997 kubelet[2512]: E0414 13:33:02.595491 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"} Apr 14 13:33:02.596997 kubelet[2512]: E0414 13:33:02.595592 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c593b64b-dfef-4876-b0f3-e403e442c5f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:02.596997 kubelet[2512]: E0414 13:33:02.595629 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c593b64b-dfef-4876-b0f3-e403e442c5f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-w9cxz" podUID="c593b64b-dfef-4876-b0f3-e403e442c5f4" Apr 14 13:33:02.623288 containerd[1458]: time="2026-04-14T13:33:02.622511904Z" level=error msg="StopPodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" failed" error="failed to destroy network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:02.626428 kubelet[2512]: E0414 13:33:02.626093 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Apr 14 13:33:02.627974 kubelet[2512]: E0414 13:33:02.627658 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"} Apr 14 13:33:02.632296 kubelet[2512]: E0414 13:33:02.629633 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a48f0b0-2c86-41cd-b28b-7d4223f81409\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:02.633491 kubelet[2512]: E0414 13:33:02.632820 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a48f0b0-2c86-41cd-b28b-7d4223f81409\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677c4b66cd-p7zmd" podUID="7a48f0b0-2c86-41cd-b28b-7d4223f81409" Apr 14 13:33:02.778801 containerd[1458]: time="2026-04-14T13:33:02.778530220Z" level=error msg="StopPodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" failed" error="failed to destroy network 
for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:33:02.781295 kubelet[2512]: E0414 13:33:02.781089 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:33:02.781295 kubelet[2512]: E0414 13:33:02.781165 2512 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf"} Apr 14 13:33:02.781295 kubelet[2512]: E0414 13:33:02.781211 2512 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"373fc5da-85f1-463d-a6ba-0ede19c097b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:33:02.781295 kubelet[2512]: E0414 13:33:02.781248 2512 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"373fc5da-85f1-463d-a6ba-0ede19c097b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dswbh" podUID="373fc5da-85f1-463d-a6ba-0ede19c097b3" Apr 14 13:33:03.325461 containerd[1458]: time="2026-04-14T13:33:03.323979424Z" level=info msg="StopPodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\"" Apr 14 13:33:03.667275 kubelet[2512]: E0414 13:33:03.666152 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:03.855 [INFO][4002] cni-plugin/k8s.go 639: Endpoint was modified before it could be deleted. Retrying... ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75dbfc9fc8--snl69-eth0", GenerateName:"whisker-75dbfc9fc8-", Namespace:"calico-system", SelfLink:"", UID:"59b4dce6-3ea4-42d3-8deb-202db303fb14", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75dbfc9fc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-75dbfc9fc8-snl69", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali77fc33b1ead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.014 [INFO][4002] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.015 [INFO][4002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" iface="eth0" netns="/var/run/netns/cni-b6ef0ce4-db90-2bc1-1425-3313aed8ef2e" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.015 [INFO][4002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" iface="eth0" netns="/var/run/netns/cni-b6ef0ce4-db90-2bc1-1425-3313aed8ef2e" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.033 [INFO][4002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" iface="eth0" netns="/var/run/netns/cni-b6ef0ce4-db90-2bc1-1425-3313aed8ef2e" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.034 [INFO][4002] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.034 [INFO][4002] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.213 [INFO][4011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.217 [INFO][4011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.217 [INFO][4011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.269 [WARNING][4011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.271 [INFO][4011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.392 [INFO][4011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:04.414527 containerd[1458]: 2026-04-14 13:33:04.400 [INFO][4002] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:33:04.422551 containerd[1458]: time="2026-04-14T13:33:04.421559209Z" level=info msg="TearDown network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" successfully" Apr 14 13:33:04.422551 containerd[1458]: time="2026-04-14T13:33:04.421654512Z" level=info msg="StopPodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" returns successfully" Apr 14 13:33:04.425405 systemd[1]: run-netns-cni\x2db6ef0ce4\x2ddb90\x2d2bc1\x2d1425\x2d3313aed8ef2e.mount: Deactivated successfully. 
Apr 14 13:33:04.581161 kubelet[2512]: I0414 13:33:04.580496 2512 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-backend-key-pair\") pod \"59b4dce6-3ea4-42d3-8deb-202db303fb14\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " Apr 14 13:33:04.586521 kubelet[2512]: I0414 13:33:04.585438 2512 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-nginx-config\") pod \"59b4dce6-3ea4-42d3-8deb-202db303fb14\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " Apr 14 13:33:04.587155 kubelet[2512]: I0414 13:33:04.586569 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "59b4dce6-3ea4-42d3-8deb-202db303fb14" (UID: "59b4dce6-3ea4-42d3-8deb-202db303fb14"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 13:33:04.627455 kubelet[2512]: I0414 13:33:04.622050 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "59b4dce6-3ea4-42d3-8deb-202db303fb14" (UID: "59b4dce6-3ea4-42d3-8deb-202db303fb14"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 13:33:04.627455 kubelet[2512]: I0414 13:33:04.625626 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "59b4dce6-3ea4-42d3-8deb-202db303fb14" (UID: "59b4dce6-3ea4-42d3-8deb-202db303fb14"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 13:33:04.627455 kubelet[2512]: I0414 13:33:04.625821 2512 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-ca-bundle\") pod \"59b4dce6-3ea4-42d3-8deb-202db303fb14\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " Apr 14 13:33:04.627455 kubelet[2512]: I0414 13:33:04.626961 2512 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl9tr\" (UniqueName: \"kubernetes.io/projected/59b4dce6-3ea4-42d3-8deb-202db303fb14-kube-api-access-dl9tr\") pod \"59b4dce6-3ea4-42d3-8deb-202db303fb14\" (UID: \"59b4dce6-3ea4-42d3-8deb-202db303fb14\") " Apr 14 13:33:04.628433 kubelet[2512]: I0414 13:33:04.627508 2512 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 14 13:33:04.628433 kubelet[2512]: I0414 13:33:04.627523 2512 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59b4dce6-3ea4-42d3-8deb-202db303fb14-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 14 13:33:04.628433 kubelet[2512]: I0414 13:33:04.627573 2512 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/59b4dce6-3ea4-42d3-8deb-202db303fb14-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 14 13:33:04.632865 systemd[1]: var-lib-kubelet-pods-59b4dce6\x2d3ea4\x2d42d3\x2d8deb\x2d202db303fb14-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 14 13:33:04.642603 kubelet[2512]: I0414 13:33:04.641560 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b4dce6-3ea4-42d3-8deb-202db303fb14-kube-api-access-dl9tr" (OuterVolumeSpecName: "kube-api-access-dl9tr") pod "59b4dce6-3ea4-42d3-8deb-202db303fb14" (UID: "59b4dce6-3ea4-42d3-8deb-202db303fb14"). InnerVolumeSpecName "kube-api-access-dl9tr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 13:33:04.652789 systemd[1]: var-lib-kubelet-pods-59b4dce6\x2d3ea4\x2d42d3\x2d8deb\x2d202db303fb14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddl9tr.mount: Deactivated successfully. Apr 14 13:33:04.783142 kubelet[2512]: I0414 13:33:04.781727 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dl9tr\" (UniqueName: \"kubernetes.io/projected/59b4dce6-3ea4-42d3-8deb-202db303fb14-kube-api-access-dl9tr\") on node \"localhost\" DevicePath \"\"" Apr 14 13:33:05.463140 systemd[1]: Removed slice kubepods-besteffort-pod59b4dce6_3ea4_42d3_8deb_202db303fb14.slice - libcontainer container kubepods-besteffort-pod59b4dce6_3ea4_42d3_8deb_202db303fb14.slice. 
Apr 14 13:33:06.889993 kubelet[2512]: I0414 13:33:06.889490 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/65762736-1dcf-4480-9ec3-4eef76aa22dc-nginx-config\") pod \"whisker-64998b8bc-8ffc5\" (UID: \"65762736-1dcf-4480-9ec3-4eef76aa22dc\") " pod="calico-system/whisker-64998b8bc-8ffc5" Apr 14 13:33:06.932656 kubelet[2512]: I0414 13:33:06.930480 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65762736-1dcf-4480-9ec3-4eef76aa22dc-whisker-ca-bundle\") pod \"whisker-64998b8bc-8ffc5\" (UID: \"65762736-1dcf-4480-9ec3-4eef76aa22dc\") " pod="calico-system/whisker-64998b8bc-8ffc5" Apr 14 13:33:06.932656 kubelet[2512]: I0414 13:33:06.931466 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65762736-1dcf-4480-9ec3-4eef76aa22dc-whisker-backend-key-pair\") pod \"whisker-64998b8bc-8ffc5\" (UID: \"65762736-1dcf-4480-9ec3-4eef76aa22dc\") " pod="calico-system/whisker-64998b8bc-8ffc5" Apr 14 13:33:06.932656 kubelet[2512]: I0414 13:33:06.931521 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b774x\" (UniqueName: \"kubernetes.io/projected/65762736-1dcf-4480-9ec3-4eef76aa22dc-kube-api-access-b774x\") pod \"whisker-64998b8bc-8ffc5\" (UID: \"65762736-1dcf-4480-9ec3-4eef76aa22dc\") " pod="calico-system/whisker-64998b8bc-8ffc5" Apr 14 13:33:06.963824 systemd[1]: Created slice kubepods-besteffort-pod65762736_1dcf_4480_9ec3_4eef76aa22dc.slice - libcontainer container kubepods-besteffort-pod65762736_1dcf_4480_9ec3_4eef76aa22dc.slice. 
Apr 14 13:33:07.583042 kubelet[2512]: E0414 13:33:07.582744 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:07.616600 containerd[1458]: time="2026-04-14T13:33:07.616446828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64998b8bc-8ffc5,Uid:65762736-1dcf-4480-9ec3-4eef76aa22dc,Namespace:calico-system,Attempt:0,}" Apr 14 13:33:07.643500 kubelet[2512]: I0414 13:33:07.643024 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b4dce6-3ea4-42d3-8deb-202db303fb14" path="/var/lib/kubelet/pods/59b4dce6-3ea4-42d3-8deb-202db303fb14/volumes" Apr 14 13:33:09.701983 kernel: calico-node[4117]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 14 13:33:10.595186 systemd-networkd[1385]: cali545a6c888f9: Link UP Apr 14 13:33:10.599370 systemd-networkd[1385]: cali545a6c888f9: Gained carrier Apr 14 13:33:10.706752 kubelet[2512]: E0414 13:33:10.699628 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:07.971 [ERROR][4140] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:08.236 [INFO][4140] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64998b8bc--8ffc5-eth0 whisker-64998b8bc- calico-system 65762736-1dcf-4480-9ec3-4eef76aa22dc 1066 0 2026-04-14 13:33:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64998b8bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64998b8bc-8ffc5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali545a6c888f9 [] [] }} ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:08.236 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:08.956 [INFO][4158] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" HandleID="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Workload="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.147 [INFO][4158] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" HandleID="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Workload="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b3e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64998b8bc-8ffc5", "timestamp":"2026-04-14 13:33:08.956773528 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000330420)} Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 
13:33:09.147 [INFO][4158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.157 [INFO][4158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.161 [INFO][4158] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.235 [INFO][4158] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.360 [INFO][4158] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.665 [INFO][4158] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.768 [INFO][4158] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.954 [INFO][4158] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:09.959 [INFO][4158] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:10.072 [INFO][4158] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:10.316 [INFO][4158] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" host="localhost" 
Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:10.517 [INFO][4158] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:10.519 [INFO][4158] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" host="localhost" Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:10.519 [INFO][4158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:11.030506 containerd[1458]: 2026-04-14 13:33:10.520 [INFO][4158] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" HandleID="k8s-pod-network.234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Workload="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" Apr 14 13:33:11.049890 containerd[1458]: 2026-04-14 13:33:10.527 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64998b8bc--8ffc5-eth0", GenerateName:"whisker-64998b8bc-", Namespace:"calico-system", SelfLink:"", UID:"65762736-1dcf-4480-9ec3-4eef76aa22dc", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64998b8bc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64998b8bc-8ffc5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali545a6c888f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:11.049890 containerd[1458]: 2026-04-14 13:33:10.528 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" Apr 14 13:33:11.049890 containerd[1458]: 2026-04-14 13:33:10.528 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali545a6c888f9 ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" Apr 14 13:33:11.049890 containerd[1458]: 2026-04-14 13:33:10.638 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" Apr 14 13:33:11.049890 containerd[1458]: 2026-04-14 13:33:10.649 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64998b8bc--8ffc5-eth0", GenerateName:"whisker-64998b8bc-", Namespace:"calico-system", SelfLink:"", UID:"65762736-1dcf-4480-9ec3-4eef76aa22dc", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64998b8bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba", Pod:"whisker-64998b8bc-8ffc5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali545a6c888f9", MAC:"ca:b7:d2:a2:2c:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:11.049890 containerd[1458]: 2026-04-14 13:33:10.980 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba" Namespace="calico-system" Pod="whisker-64998b8bc-8ffc5" WorkloadEndpoint="localhost-k8s-whisker--64998b8bc--8ffc5-eth0" 
Apr 14 13:33:11.286567 containerd[1458]: time="2026-04-14T13:33:11.285784193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:11.290550 containerd[1458]: time="2026-04-14T13:33:11.287786116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:11.346428 containerd[1458]: time="2026-04-14T13:33:11.341858294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:11.346428 containerd[1458]: time="2026-04-14T13:33:11.345445833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:11.496624 systemd[1]: Started cri-containerd-234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba.scope - libcontainer container 234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba. 
Apr 14 13:33:11.587058 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:11.758261 systemd-networkd[1385]: vxlan.calico: Link UP Apr 14 13:33:11.758271 systemd-networkd[1385]: vxlan.calico: Gained carrier Apr 14 13:33:11.771675 containerd[1458]: time="2026-04-14T13:33:11.771544743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64998b8bc-8ffc5,Uid:65762736-1dcf-4480-9ec3-4eef76aa22dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba\"" Apr 14 13:33:11.782886 containerd[1458]: time="2026-04-14T13:33:11.782744754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 14 13:33:11.864625 systemd-networkd[1385]: cali545a6c888f9: Gained IPv6LL Apr 14 13:33:12.598549 containerd[1458]: time="2026-04-14T13:33:12.598191920Z" level=info msg="StopPodSandbox for \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\"" Apr 14 13:33:12.601044 containerd[1458]: time="2026-04-14T13:33:12.600504995Z" level=info msg="StopPodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\"" Apr 14 13:33:13.410259 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.454 [INFO][4341] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.459 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" iface="eth0" netns="/var/run/netns/cni-f6e6418b-d140-dfdd-d294-ae8154b668d0" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.459 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" iface="eth0" netns="/var/run/netns/cni-f6e6418b-d140-dfdd-d294-ae8154b668d0" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.459 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" iface="eth0" netns="/var/run/netns/cni-f6e6418b-d140-dfdd-d294-ae8154b668d0" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.460 [INFO][4341] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.460 [INFO][4341] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.846 [INFO][4380] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.846 [INFO][4380] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:13.847 [INFO][4380] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:14.048 [WARNING][4380] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:14.049 [INFO][4380] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:14.208 [INFO][4380] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:14.235073 containerd[1458]: 2026-04-14 13:33:14.225 [INFO][4341] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:14.235073 containerd[1458]: time="2026-04-14T13:33:14.234674591Z" level=info msg="TearDown network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" successfully" Apr 14 13:33:14.235073 containerd[1458]: time="2026-04-14T13:33:14.234803310Z" level=info msg="StopPodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" returns successfully" Apr 14 13:33:14.245487 systemd[1]: run-netns-cni\x2df6e6418b\x2dd140\x2ddfdd\x2dd294\x2dae8154b668d0.mount: Deactivated successfully. 
Apr 14 13:33:14.267868 kubelet[2512]: E0414 13:33:14.267713 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:14.289285 containerd[1458]: time="2026-04-14T13:33:14.288631439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g5q72,Uid:fbc203ea-65cb-4880-91f1-00f13ee08f83,Namespace:kube-system,Attempt:1,}" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:13.373 [INFO][4338] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:13.382 [INFO][4338] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" iface="eth0" netns="/var/run/netns/cni-aa2149b3-fc65-9364-dc34-386851a63860" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:13.388 [INFO][4338] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" iface="eth0" netns="/var/run/netns/cni-aa2149b3-fc65-9364-dc34-386851a63860" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:13.388 [INFO][4338] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" iface="eth0" netns="/var/run/netns/cni-aa2149b3-fc65-9364-dc34-386851a63860" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:13.391 [INFO][4338] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:13.391 [INFO][4338] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.172 [INFO][4372] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.173 [INFO][4372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.210 [INFO][4372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.394 [WARNING][4372] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.398 [INFO][4372] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.516 [INFO][4372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:14.528677 containerd[1458]: 2026-04-14 13:33:14.521 [INFO][4338] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:33:14.535686 containerd[1458]: time="2026-04-14T13:33:14.535621796Z" level=info msg="TearDown network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" successfully" Apr 14 13:33:14.535686 containerd[1458]: time="2026-04-14T13:33:14.535676581Z" level=info msg="StopPodSandbox for \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" returns successfully" Apr 14 13:33:14.550207 systemd[1]: run-netns-cni\x2daa2149b3\x2dfc65\x2d9364\x2ddc34\x2d386851a63860.mount: Deactivated successfully. 
Apr 14 13:33:14.582340 containerd[1458]: time="2026-04-14T13:33:14.579260444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbfq7,Uid:e0f11f62-5546-4397-955f-97b1110f25d7,Namespace:calico-system,Attempt:1,}" Apr 14 13:33:14.639317 containerd[1458]: time="2026-04-14T13:33:14.639117438Z" level=info msg="StopPodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\"" Apr 14 13:33:14.706625 containerd[1458]: time="2026-04-14T13:33:14.706515010Z" level=info msg="StopPodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\"" Apr 14 13:33:14.710651 containerd[1458]: time="2026-04-14T13:33:14.708280840Z" level=info msg="StopPodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\"" Apr 14 13:33:15.856806 containerd[1458]: time="2026-04-14T13:33:15.856226198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:15.876613 containerd[1458]: time="2026-04-14T13:33:15.876231293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 14 13:33:15.884850 containerd[1458]: time="2026-04-14T13:33:15.884498906Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:15.936204 containerd[1458]: time="2026-04-14T13:33:15.936117058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:15.938295 containerd[1458]: time="2026-04-14T13:33:15.936802371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 4.153924653s" Apr 14 13:33:15.938295 containerd[1458]: time="2026-04-14T13:33:15.936830384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 14 13:33:16.183973 containerd[1458]: time="2026-04-14T13:33:16.177954504Z" level=info msg="CreateContainer within sandbox \"234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 14 13:33:16.486053 containerd[1458]: time="2026-04-14T13:33:16.485490691Z" level=info msg="CreateContainer within sandbox \"234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"773a3c760730a4a67ceb077cc161b461d253ebd0a7c9ae79481294060cc3373e\"" Apr 14 13:33:16.530122 containerd[1458]: time="2026-04-14T13:33:16.529883157Z" level=info msg="StartContainer for \"773a3c760730a4a67ceb077cc161b461d253ebd0a7c9ae79481294060cc3373e\"" Apr 14 13:33:16.706851 containerd[1458]: time="2026-04-14T13:33:16.706411307Z" level=info msg="StopPodSandbox for \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\"" Apr 14 13:33:17.147720 systemd[1]: Started cri-containerd-773a3c760730a4a67ceb077cc161b461d253ebd0a7c9ae79481294060cc3373e.scope - libcontainer container 773a3c760730a4a67ceb077cc161b461d253ebd0a7c9ae79481294060cc3373e. 
Apr 14 13:33:17.693140 containerd[1458]: time="2026-04-14T13:33:17.691796038Z" level=info msg="StartContainer for \"773a3c760730a4a67ceb077cc161b461d253ebd0a7c9ae79481294060cc3373e\" returns successfully" Apr 14 13:33:17.794147 containerd[1458]: time="2026-04-14T13:33:17.792657403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 14 13:33:18.601217 kubelet[2512]: E0414 13:33:18.599062 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:18.677035 containerd[1458]: time="2026-04-14T13:33:18.676797663Z" level=info msg="StopPodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\"" Apr 14 13:33:19.073473 systemd-networkd[1385]: cali2d993e4c2ad: Link UP Apr 14 13:33:19.088551 systemd-networkd[1385]: cali2d993e4c2ad: Gained carrier Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:15.081 [INFO][4399] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--g5q72-eth0 coredns-66bc5c9577- kube-system fbc203ea-65cb-4880-91f1-00f13ee08f83 1088 0 2026-04-14 13:32:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-g5q72 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d993e4c2ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:15.081 [INFO][4399] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:16.349 [INFO][4478] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" HandleID="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:16.820 [INFO][4478] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" HandleID="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c02d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-g5q72", "timestamp":"2026-04-14 13:33:16.349622457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002aa160)} Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:16.825 [INFO][4478] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:16.867 [INFO][4478] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:16.868 [INFO][4478] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:17.140 [INFO][4478] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:17.645 [INFO][4478] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.060 [INFO][4478] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.250 [INFO][4478] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.366 [INFO][4478] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.367 [INFO][4478] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.493 [INFO][4478] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043 Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.661 [INFO][4478] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.892 [INFO][4478] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.898 [INFO][4478] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" host="localhost" Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.923 [INFO][4478] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:19.527604 containerd[1458]: 2026-04-14 13:33:18.930 [INFO][4478] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" HandleID="k8s-pod-network.28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:19.533571 containerd[1458]: 2026-04-14 13:33:18.951 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g5q72-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc203ea-65cb-4880-91f1-00f13ee08f83", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-g5q72", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d993e4c2ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:19.533571 containerd[1458]: 2026-04-14 13:33:18.955 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:19.533571 containerd[1458]: 2026-04-14 13:33:18.955 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d993e4c2ad ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 
13:33:19.533571 containerd[1458]: 2026-04-14 13:33:19.097 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:19.533571 containerd[1458]: 2026-04-14 13:33:19.113 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g5q72-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc203ea-65cb-4880-91f1-00f13ee08f83", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043", Pod:"coredns-66bc5c9577-g5q72", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d993e4c2ad", 
MAC:"ee:7c:a9:cf:5d:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 13:33:19.543183 containerd[1458]: 2026-04-14 13:33:19.512 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043" Namespace="kube-system" Pod="coredns-66bc5c9577-g5q72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g5q72-eth0"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.027 [INFO][4421] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.040 [INFO][4421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" iface="eth0" netns="/var/run/netns/cni-aff3d040-fd45-6406-a8e1-3ec2e4604ca7"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.042 [INFO][4421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" iface="eth0" netns="/var/run/netns/cni-aff3d040-fd45-6406-a8e1-3ec2e4604ca7"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.048 [INFO][4421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" iface="eth0" netns="/var/run/netns/cni-aff3d040-fd45-6406-a8e1-3ec2e4604ca7"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.057 [INFO][4421] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.064 [INFO][4421] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.932 [INFO][4490] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:16.932 [INFO][4490] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:18.903 [INFO][4490] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:19.332 [WARNING][4490] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:19.339 [INFO][4490] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:19.520 [INFO][4490] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:33:19.588230 containerd[1458]: 2026-04-14 13:33:19.563 [INFO][4421] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:33:19.593692 systemd[1]: run-netns-cni\x2daff3d040\x2dfd45\x2d6406\x2da8e1\x2d3ec2e4604ca7.mount: Deactivated successfully.
Apr 14 13:33:19.673638 containerd[1458]: time="2026-04-14T13:33:19.662439944Z" level=info msg="TearDown network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" successfully"
Apr 14 13:33:19.673638 containerd[1458]: time="2026-04-14T13:33:19.662885793Z" level=info msg="StopPodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" returns successfully"
Apr 14 13:33:19.697320 containerd[1458]: time="2026-04-14T13:33:19.696256681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-p7zmd,Uid:7a48f0b0-2c86-41cd-b28b-7d4223f81409,Namespace:calico-system,Attempt:1,}"
Apr 14 13:33:19.857452 containerd[1458]: time="2026-04-14T13:33:19.856565107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:33:19.857452 containerd[1458]: time="2026-04-14T13:33:19.856671772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 13:33:19.857452 containerd[1458]: time="2026-04-14T13:33:19.856684631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:33:19.857452 containerd[1458]: time="2026-04-14T13:33:19.856818039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:33:20.065291 systemd[1]: run-containerd-runc-k8s.io-28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043-runc.NntzzH.mount: Deactivated successfully.
Apr 14 13:33:20.103283 systemd[1]: Started cri-containerd-28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043.scope - libcontainer container 28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043.
Apr 14 13:33:20.322636 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:16.068 [INFO][4446] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:16.142 [INFO][4446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" iface="eth0" netns="/var/run/netns/cni-80ae3dde-b30b-6780-f3e6-73280852f332"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:16.143 [INFO][4446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" iface="eth0" netns="/var/run/netns/cni-80ae3dde-b30b-6780-f3e6-73280852f332"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:16.143 [INFO][4446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" iface="eth0" netns="/var/run/netns/cni-80ae3dde-b30b-6780-f3e6-73280852f332"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:16.144 [INFO][4446] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:16.145 [INFO][4446] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:17.069 [INFO][4492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:17.070 [INFO][4492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:19.522 [INFO][4492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:20.253 [WARNING][4492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:20.261 [INFO][4492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:20.489 [INFO][4492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:33:20.592807 containerd[1458]: 2026-04-14 13:33:20.512 [INFO][4446] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:33:20.632158 containerd[1458]: time="2026-04-14T13:33:20.601179621Z" level=info msg="TearDown network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" successfully"
Apr 14 13:33:20.651814 containerd[1458]: time="2026-04-14T13:33:20.601510027Z" level=info msg="StopPodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" returns successfully"
Apr 14 13:33:20.669485 systemd[1]: run-netns-cni\x2d80ae3dde\x2db30b\x2d6780\x2df3e6\x2d73280852f332.mount: Deactivated successfully.
Apr 14 13:33:20.683605 containerd[1458]: time="2026-04-14T13:33:20.683369951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g5q72,Uid:fbc203ea-65cb-4880-91f1-00f13ee08f83,Namespace:kube-system,Attempt:1,} returns sandbox id \"28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043\""
Apr 14 13:33:20.762462 kubelet[2512]: E0414 13:33:20.760258 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:33:20.823597 systemd-networkd[1385]: cali2d993e4c2ad: Gained IPv6LL
Apr 14 13:33:20.898461 containerd[1458]: time="2026-04-14T13:33:20.895886568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9cxz,Uid:c593b64b-dfef-4876-b0f3-e403e442c5f4,Namespace:calico-system,Attempt:1,}"
Apr 14 13:33:20.947802 containerd[1458]: time="2026-04-14T13:33:20.947628882Z" level=info msg="CreateContainer within sandbox \"28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 13:33:21.339547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005385731.mount: Deactivated successfully.
Apr 14 13:33:21.367386 containerd[1458]: time="2026-04-14T13:33:21.367002479Z" level=info msg="CreateContainer within sandbox \"28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b55403086df5d83f89967773606b2afc71262401b79c4be4fb65b22072f6f842\""
Apr 14 13:33:21.393104 containerd[1458]: time="2026-04-14T13:33:21.392965601Z" level=info msg="StartContainer for \"b55403086df5d83f89967773606b2afc71262401b79c4be4fb65b22072f6f842\""
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:16.495 [INFO][4456] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:16.496 [INFO][4456] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" iface="eth0" netns="/var/run/netns/cni-b0f45281-02df-6da8-173d-ac32fcb42876"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:16.496 [INFO][4456] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" iface="eth0" netns="/var/run/netns/cni-b0f45281-02df-6da8-173d-ac32fcb42876"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:16.496 [INFO][4456] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" iface="eth0" netns="/var/run/netns/cni-b0f45281-02df-6da8-173d-ac32fcb42876"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:16.496 [INFO][4456] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:16.496 [INFO][4456] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:17.494 [INFO][4505] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:17.528 [INFO][4505] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:20.486 [INFO][4505] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:21.421 [WARNING][4505] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:21.422 [INFO][4505] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0"
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:21.682 [INFO][4505] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:33:21.996848 containerd[1458]: 2026-04-14 13:33:21.943 [INFO][4456] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c"
Apr 14 13:33:22.068537 containerd[1458]: time="2026-04-14T13:33:22.068268183Z" level=info msg="TearDown network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" successfully"
Apr 14 13:33:22.068537 containerd[1458]: time="2026-04-14T13:33:22.068417663Z" level=info msg="StopPodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" returns successfully"
Apr 14 13:33:22.061751 systemd[1]: run-netns-cni\x2db0f45281\x2d02df\x2d6da8\x2d173d\x2dac32fcb42876.mount: Deactivated successfully.
Apr 14 13:33:22.136010 containerd[1458]: time="2026-04-14T13:33:22.134751500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6dc49cc-7555d,Uid:4783cd1c-b4fb-4d25-b891-52cfc6659501,Namespace:calico-system,Attempt:1,}"
Apr 14 13:33:22.390326 systemd[1]: Started cri-containerd-b55403086df5d83f89967773606b2afc71262401b79c4be4fb65b22072f6f842.scope - libcontainer container b55403086df5d83f89967773606b2afc71262401b79c4be4fb65b22072f6f842.
Apr 14 13:33:22.921055 containerd[1458]: time="2026-04-14T13:33:22.920412962Z" level=info msg="StartContainer for \"b55403086df5d83f89967773606b2afc71262401b79c4be4fb65b22072f6f842\" returns successfully"
Apr 14 13:33:23.366732 kubelet[2512]: E0414 13:33:23.366202 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:33:24.522985 kubelet[2512]: E0414 13:33:24.521208 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:33:24.833101 kubelet[2512]: I0414 13:33:24.832431 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g5q72" podStartSLOduration=84.832388133 podStartE2EDuration="1m24.832388133s" podCreationTimestamp="2026-04-14 13:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:33:24.694656865 +0000 UTC m=+89.530989533" watchObservedRunningTime="2026-04-14 13:33:24.832388133 +0000 UTC m=+89.668720806"
Apr 14 13:33:25.602018 kubelet[2512]: E0414 13:33:25.600598 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:33:25.859406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465237469.mount: Deactivated successfully.
Apr 14 13:33:26.093159 containerd[1458]: time="2026-04-14T13:33:26.087350250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:33:26.093159 containerd[1458]: time="2026-04-14T13:33:26.091352827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 14 13:33:26.153461 containerd[1458]: time="2026-04-14T13:33:26.095322422Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:33:26.164203 containerd[1458]: time="2026-04-14T13:33:26.163478557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:33:26.165036 containerd[1458]: time="2026-04-14T13:33:26.164964317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 8.372183129s"
Apr 14 13:33:26.165257 containerd[1458]: time="2026-04-14T13:33:26.165208319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 14 13:33:26.219138 containerd[1458]: time="2026-04-14T13:33:26.216588211Z" level=info msg="CreateContainer within sandbox \"234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 14 13:33:26.429297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183296892.mount: Deactivated successfully.
Apr 14 13:33:26.462715 containerd[1458]: time="2026-04-14T13:33:26.461885339Z" level=info msg="CreateContainer within sandbox \"234696be77e8e655f2ac4799c3f94a4b9ae7444fb5a577751ff2328b4e72d0ba\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"65ba849391b3398c29fb5163247f11e6b0f0e2254fdef49f7abed86124f5150b\""
Apr 14 13:33:26.474291 containerd[1458]: time="2026-04-14T13:33:26.474186707Z" level=info msg="StartContainer for \"65ba849391b3398c29fb5163247f11e6b0f0e2254fdef49f7abed86124f5150b\""
Apr 14 13:33:26.882306 systemd[1]: Started cri-containerd-65ba849391b3398c29fb5163247f11e6b0f0e2254fdef49f7abed86124f5150b.scope - libcontainer container 65ba849391b3398c29fb5163247f11e6b0f0e2254fdef49f7abed86124f5150b.
Apr 14 13:33:26.963301 kubelet[2512]: E0414 13:33:26.962836 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:33:27.131645 systemd-networkd[1385]: cali2b420826253: Link UP
Apr 14 13:33:27.170433 systemd-networkd[1385]: cali2b420826253: Gained carrier
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.054 [INFO][4536] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.073 [INFO][4536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" iface="eth0" netns="/var/run/netns/cni-c77816fe-7615-9d17-5f03-98f6252f9a32"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.073 [INFO][4536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" iface="eth0" netns="/var/run/netns/cni-c77816fe-7615-9d17-5f03-98f6252f9a32"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.075 [INFO][4536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" iface="eth0" netns="/var/run/netns/cni-c77816fe-7615-9d17-5f03-98f6252f9a32"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.076 [INFO][4536] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.076 [INFO][4536] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.675 [INFO][4594] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:18.688 [INFO][4594] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:26.959 [INFO][4594] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:27.480 [WARNING][4594] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:27.489 [INFO][4594] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0"
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:27.541 [INFO][4594] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:33:27.547649 containerd[1458]: 2026-04-14 13:33:27.543 [INFO][4536] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b"
Apr 14 13:33:27.580020 containerd[1458]: time="2026-04-14T13:33:27.548163578Z" level=info msg="TearDown network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" successfully"
Apr 14 13:33:27.580020 containerd[1458]: time="2026-04-14T13:33:27.548296974Z" level=info msg="StopPodSandbox for \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" returns successfully"
Apr 14 13:33:27.582661 systemd[1]: run-netns-cni\x2dc77816fe\x2d7615\x2d9d17\x2d5f03\x2d98f6252f9a32.mount: Deactivated successfully.
Apr 14 13:33:27.715517 containerd[1458]: time="2026-04-14T13:33:27.714386453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-bnqqz,Uid:8dce0864-7c1c-4c82-8be6-3d53a4d967af,Namespace:calico-system,Attempt:1,}"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:15.791 [INFO][4464] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dbfq7-eth0 csi-node-driver- calico-system e0f11f62-5546-4397-955f-97b1110f25d7 1087 0 2026-04-14 13:32:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dbfq7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2b420826253 [] [] }} ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:15.822 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:17.494 [INFO][4510] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" HandleID="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:17.714 [INFO][4510] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" HandleID="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dbfq7", "timestamp":"2026-04-14 13:33:17.494758739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000564420)}
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:17.728 [INFO][4510] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:21.697 [INFO][4510] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:21.726 [INFO][4510] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:21.947 [INFO][4510] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:22.451 [INFO][4510] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:23.361 [INFO][4510] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:23.851 [INFO][4510] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:24.201 [INFO][4510] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:24.313 [INFO][4510] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:25.435 [INFO][4510] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:26.451 [INFO][4510] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:26.951 [INFO][4510] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:26.952 [INFO][4510] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" host="localhost"
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:26.953 [INFO][4510] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:33:28.111995 containerd[1458]: 2026-04-14 13:33:26.953 [INFO][4510] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" HandleID="k8s-pod-network.00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.118387 containerd[1458]: 2026-04-14 13:33:26.983 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbfq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e0f11f62-5546-4397-955f-97b1110f25d7", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dbfq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b420826253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 13:33:28.118387 containerd[1458]: 2026-04-14 13:33:27.005 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.118387 containerd[1458]: 2026-04-14 13:33:27.011 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b420826253 ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.118387 containerd[1458]: 2026-04-14 13:33:27.173 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.118387 containerd[1458]: 2026-04-14 13:33:27.177 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbfq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e0f11f62-5546-4397-955f-97b1110f25d7", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7", Pod:"csi-node-driver-dbfq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b420826253", MAC:"ca:6c:d4:30:3f:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 13:33:28.118387 containerd[1458]: 2026-04-14 13:33:27.996 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7" Namespace="calico-system" Pod="csi-node-driver-dbfq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--dbfq7-eth0"
Apr 14 13:33:28.283040 containerd[1458]: time="2026-04-14T13:33:28.281570615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:33:28.287340 containerd[1458]: time="2026-04-14T13:33:28.286442255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 13:33:28.289629 containerd[1458]: time="2026-04-14T13:33:28.287842762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:33:28.297220 containerd[1458]: time="2026-04-14T13:33:28.294861618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:33:28.440516 systemd-networkd[1385]: cali2b420826253: Gained IPv6LL
Apr 14 13:33:28.494085 systemd[1]: Started cri-containerd-00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7.scope - libcontainer container 00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7.
Apr 14 13:33:28.637569 containerd[1458]: time="2026-04-14T13:33:28.637517281Z" level=info msg="StartContainer for \"65ba849391b3398c29fb5163247f11e6b0f0e2254fdef49f7abed86124f5150b\" returns successfully"
Apr 14 13:33:28.768891 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:20.089 [INFO][4622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:20.090 [INFO][4622] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" iface="eth0" netns="/var/run/netns/cni-c4617baa-6c69-ab40-7dd4-e6a9d91820ee"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:20.096 [INFO][4622] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" iface="eth0" netns="/var/run/netns/cni-c4617baa-6c69-ab40-7dd4-e6a9d91820ee"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:20.115 [INFO][4622] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" iface="eth0" netns="/var/run/netns/cni-c4617baa-6c69-ab40-7dd4-e6a9d91820ee"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:20.115 [INFO][4622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:20.115 [INFO][4622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:23.203 [INFO][4697] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0"
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:23.204 [INFO][4697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:27.541 [INFO][4697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:27.920 [WARNING][4697] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:27.922 [INFO][4697] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:28.915 [INFO][4697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:28.936072 containerd[1458]: 2026-04-14 13:33:28.918 [INFO][4622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:33:28.954442 containerd[1458]: time="2026-04-14T13:33:28.945388224Z" level=info msg="TearDown network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" successfully" Apr 14 13:33:28.954442 containerd[1458]: time="2026-04-14T13:33:28.945564908Z" level=info msg="StopPodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" returns successfully" Apr 14 13:33:28.948416 systemd[1]: run-netns-cni\x2dc4617baa\x2d6c69\x2dab40\x2d7dd4\x2de6a9d91820ee.mount: Deactivated successfully. 
Apr 14 13:33:29.055293 kubelet[2512]: E0414 13:33:29.053150 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:29.123406 containerd[1458]: time="2026-04-14T13:33:29.057841909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dswbh,Uid:373fc5da-85f1-463d-a6ba-0ede19c097b3,Namespace:kube-system,Attempt:1,}" Apr 14 13:33:29.196454 containerd[1458]: time="2026-04-14T13:33:29.196351233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbfq7,Uid:e0f11f62-5546-4397-955f-97b1110f25d7,Namespace:calico-system,Attempt:1,} returns sandbox id \"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7\"" Apr 14 13:33:29.429460 containerd[1458]: time="2026-04-14T13:33:29.429014380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 14 13:33:32.286827 systemd-networkd[1385]: cali5d2c6cf735b: Link UP Apr 14 13:33:32.296227 systemd-networkd[1385]: cali5d2c6cf735b: Gained carrier Apr 14 13:33:32.621072 kubelet[2512]: I0414 13:33:32.614553 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64998b8bc-8ffc5" podStartSLOduration=12.214848181 podStartE2EDuration="26.614520322s" podCreationTimestamp="2026-04-14 13:33:06 +0000 UTC" firstStartedPulling="2026-04-14 13:33:11.781734674 +0000 UTC m=+76.618067334" lastFinishedPulling="2026-04-14 13:33:26.181406807 +0000 UTC m=+91.017739475" observedRunningTime="2026-04-14 13:33:30.956636015 +0000 UTC m=+95.792968953" watchObservedRunningTime="2026-04-14 13:33:32.614520322 +0000 UTC m=+97.450852984" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:21.092 [INFO][4672] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0 calico-apiserver-677c4b66cd- calico-system 
7a48f0b0-2c86-41cd-b28b-7d4223f81409 1100 0 2026-04-14 13:32:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677c4b66cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-677c4b66cd-p7zmd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5d2c6cf735b [] [] }} ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:21.097 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:23.857 [INFO][4744] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" HandleID="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:24.295 [INFO][4744] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" HandleID="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b2ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-apiserver-677c4b66cd-p7zmd", "timestamp":"2026-04-14 13:33:23.85704948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00024cc60)} Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:24.310 [INFO][4744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:28.916 [INFO][4744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:28.916 [INFO][4744] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:29.977 [INFO][4744] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:30.729 [INFO][4744] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:31.359 [INFO][4744] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:31.490 [INFO][4744] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:31.747 [INFO][4744] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:31.747 [INFO][4744] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 
2026-04-14 13:33:31.883 [INFO][4744] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:32.010 [INFO][4744] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:32.190 [INFO][4744] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:32.193 [INFO][4744] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" host="localhost" Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:32.193 [INFO][4744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 13:33:32.633727 containerd[1458]: 2026-04-14 13:33:32.193 [INFO][4744] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" HandleID="k8s-pod-network.47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.650685 containerd[1458]: 2026-04-14 13:33:32.220 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"7a48f0b0-2c86-41cd-b28b-7d4223f81409", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-677c4b66cd-p7zmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5d2c6cf735b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:32.650685 containerd[1458]: 2026-04-14 13:33:32.220 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.650685 containerd[1458]: 2026-04-14 13:33:32.220 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d2c6cf735b ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.650685 containerd[1458]: 2026-04-14 13:33:32.299 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.650685 containerd[1458]: 2026-04-14 13:33:32.353 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", 
SelfLink:"", UID:"7a48f0b0-2c86-41cd-b28b-7d4223f81409", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe", Pod:"calico-apiserver-677c4b66cd-p7zmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5d2c6cf735b", MAC:"5a:69:1e:51:34:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:32.650685 containerd[1458]: 2026-04-14 13:33:32.601 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-p7zmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0" Apr 14 13:33:32.949234 containerd[1458]: time="2026-04-14T13:33:32.946624010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:32.949234 containerd[1458]: time="2026-04-14T13:33:32.946703911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:32.949234 containerd[1458]: time="2026-04-14T13:33:32.946725593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:32.949234 containerd[1458]: time="2026-04-14T13:33:32.946872952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:33.154237 systemd[1]: Started cri-containerd-47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe.scope - libcontainer container 47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe. Apr 14 13:33:33.461429 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:33.549177 containerd[1458]: time="2026-04-14T13:33:33.549085898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:33.553946 containerd[1458]: time="2026-04-14T13:33:33.553793072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 14 13:33:33.572222 containerd[1458]: time="2026-04-14T13:33:33.571812136Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:33.593357 containerd[1458]: time="2026-04-14T13:33:33.592957001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 
13:33:33.634701 containerd[1458]: time="2026-04-14T13:33:33.634555195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 4.205315693s" Apr 14 13:33:33.634701 containerd[1458]: time="2026-04-14T13:33:33.634595439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 14 13:33:33.681375 containerd[1458]: time="2026-04-14T13:33:33.678402676Z" level=info msg="CreateContainer within sandbox \"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 14 13:33:33.857053 systemd-networkd[1385]: cali4ccfa2af1f6: Link UP Apr 14 13:33:33.863542 systemd-networkd[1385]: cali4ccfa2af1f6: Gained carrier Apr 14 13:33:34.058358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710033487.mount: Deactivated successfully. 
Apr 14 13:33:34.078014 containerd[1458]: time="2026-04-14T13:33:34.074873792Z" level=info msg="CreateContainer within sandbox \"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7cdfd032dedd54bca652af9828a62f5ee6eb8b5881e50318d06fa6f28a2c61fc\"" Apr 14 13:33:34.101105 containerd[1458]: time="2026-04-14T13:33:34.100658441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-p7zmd,Uid:7a48f0b0-2c86-41cd-b28b-7d4223f81409,Namespace:calico-system,Attempt:1,} returns sandbox id \"47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe\"" Apr 14 13:33:34.145264 containerd[1458]: time="2026-04-14T13:33:34.135091774Z" level=info msg="StartContainer for \"7cdfd032dedd54bca652af9828a62f5ee6eb8b5881e50318d06fa6f28a2c61fc\"" Apr 14 13:33:34.135682 systemd-networkd[1385]: cali5d2c6cf735b: Gained IPv6LL Apr 14 13:33:34.256012 containerd[1458]: time="2026-04-14T13:33:34.255797113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:22.997 [INFO][4729] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0 goldmane-cccfbd5cf- calico-system c593b64b-dfef-4876-b0f3-e403e442c5f4 1099 0 2026-04-14 13:32:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-w9cxz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4ccfa2af1f6 [] [] }} ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-" Apr 14 13:33:34.318261 
containerd[1458]: 2026-04-14 13:33:23.035 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:24.981 [INFO][4806] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" HandleID="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:25.590 [INFO][4806] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" HandleID="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036ae00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-w9cxz", "timestamp":"2026-04-14 13:33:24.981670662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004fadc0)} Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:25.590 [INFO][4806] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:32.197 [INFO][4806] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:32.200 [INFO][4806] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:32.352 [INFO][4806] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:32.595 [INFO][4806] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:32.798 [INFO][4806] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:32.965 [INFO][4806] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.193 [INFO][4806] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.199 [INFO][4806] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.347 [INFO][4806] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.396 [INFO][4806] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.668 [INFO][4806] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.711 [INFO][4806] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" host="localhost" Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.720 [INFO][4806] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:34.318261 containerd[1458]: 2026-04-14 13:33:33.723 [INFO][4806] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" HandleID="k8s-pod-network.7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.320788 containerd[1458]: 2026-04-14 13:33:33.745 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c593b64b-dfef-4876-b0f3-e403e442c5f4", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-w9cxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ccfa2af1f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:34.320788 containerd[1458]: 2026-04-14 13:33:33.756 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.320788 containerd[1458]: 2026-04-14 13:33:33.756 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ccfa2af1f6 ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.320788 containerd[1458]: 2026-04-14 13:33:33.919 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.320788 containerd[1458]: 2026-04-14 13:33:33.939 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c593b64b-dfef-4876-b0f3-e403e442c5f4", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc", Pod:"goldmane-cccfbd5cf-w9cxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ccfa2af1f6", MAC:"a2:30:da:81:5d:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:34.320788 containerd[1458]: 2026-04-14 13:33:34.297 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9cxz" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0" Apr 14 13:33:34.444155 systemd[1]: Started 
cri-containerd-7cdfd032dedd54bca652af9828a62f5ee6eb8b5881e50318d06fa6f28a2c61fc.scope - libcontainer container 7cdfd032dedd54bca652af9828a62f5ee6eb8b5881e50318d06fa6f28a2c61fc. Apr 14 13:33:34.528147 containerd[1458]: time="2026-04-14T13:33:34.527980770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:34.528147 containerd[1458]: time="2026-04-14T13:33:34.528158263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:34.528439 containerd[1458]: time="2026-04-14T13:33:34.528177514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:34.528439 containerd[1458]: time="2026-04-14T13:33:34.528366817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:34.660329 systemd[1]: Started cri-containerd-7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc.scope - libcontainer container 7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc. 
Apr 14 13:33:34.882366 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:34.889586 containerd[1458]: time="2026-04-14T13:33:34.886846609Z" level=info msg="StartContainer for \"7cdfd032dedd54bca652af9828a62f5ee6eb8b5881e50318d06fa6f28a2c61fc\" returns successfully" Apr 14 13:33:35.090968 containerd[1458]: time="2026-04-14T13:33:35.090700961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9cxz,Uid:c593b64b-dfef-4876-b0f3-e403e442c5f4,Namespace:calico-system,Attempt:1,} returns sandbox id \"7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc\"" Apr 14 13:33:35.381989 systemd-networkd[1385]: cali3c257c97333: Link UP Apr 14 13:33:35.483214 systemd-networkd[1385]: cali3c257c97333: Gained carrier Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:24.153 [INFO][4768] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0 calico-kube-controllers-fd6dc49cc- calico-system 4783cd1c-b4fb-4d25-b891-52cfc6659501 1102 0 2026-04-14 13:32:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fd6dc49cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-fd6dc49cc-7555d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3c257c97333 [] [] }} ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:24.155 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:25.879 [INFO][4819] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" HandleID="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:26.473 [INFO][4819] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" HandleID="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048fd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-fd6dc49cc-7555d", "timestamp":"2026-04-14 13:33:25.879840854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00023a6e0)} Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:26.473 [INFO][4819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:33.721 [INFO][4819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:33.721 [INFO][4819] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:34.191 [INFO][4819] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:34.407 [INFO][4819] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:34.724 [INFO][4819] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:34.886 [INFO][4819] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:34.983 [INFO][4819] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:34.988 [INFO][4819] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:35.093 [INFO][4819] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1 Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:35.207 [INFO][4819] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:35.291 [INFO][4819] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:35.299 [INFO][4819] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" host="localhost" Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:35.336 [INFO][4819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:35.687505 containerd[1458]: 2026-04-14 13:33:35.336 [INFO][4819] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" HandleID="k8s-pod-network.c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.688429 containerd[1458]: 2026-04-14 13:33:35.350 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0", GenerateName:"calico-kube-controllers-fd6dc49cc-", Namespace:"calico-system", SelfLink:"", UID:"4783cd1c-b4fb-4d25-b891-52cfc6659501", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6dc49cc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-fd6dc49cc-7555d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c257c97333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:35.688429 containerd[1458]: 2026-04-14 13:33:35.350 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.688429 containerd[1458]: 2026-04-14 13:33:35.351 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c257c97333 ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.688429 containerd[1458]: 2026-04-14 13:33:35.483 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.688429 containerd[1458]: 2026-04-14 
13:33:35.487 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0", GenerateName:"calico-kube-controllers-fd6dc49cc-", Namespace:"calico-system", SelfLink:"", UID:"4783cd1c-b4fb-4d25-b891-52cfc6659501", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6dc49cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1", Pod:"calico-kube-controllers-fd6dc49cc-7555d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c257c97333", MAC:"b2:8a:96:e7:21:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:35.688429 containerd[1458]: 2026-04-14 
13:33:35.658 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1" Namespace="calico-system" Pod="calico-kube-controllers-fd6dc49cc-7555d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:35.799594 systemd-networkd[1385]: cali4ccfa2af1f6: Gained IPv6LL Apr 14 13:33:35.874972 containerd[1458]: time="2026-04-14T13:33:35.872367850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:35.874972 containerd[1458]: time="2026-04-14T13:33:35.872528560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:35.874972 containerd[1458]: time="2026-04-14T13:33:35.872544992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:35.874972 containerd[1458]: time="2026-04-14T13:33:35.872668849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:36.001393 systemd[1]: Started cri-containerd-c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1.scope - libcontainer container c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1. 
Apr 14 13:33:36.290001 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:36.491056 containerd[1458]: time="2026-04-14T13:33:36.490745508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6dc49cc-7555d,Uid:4783cd1c-b4fb-4d25-b891-52cfc6659501,Namespace:calico-system,Attempt:1,} returns sandbox id \"c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1\"" Apr 14 13:33:36.824653 systemd-networkd[1385]: cali3c257c97333: Gained IPv6LL Apr 14 13:33:37.030226 systemd-networkd[1385]: cali3e04c3e05b8: Link UP Apr 14 13:33:37.034024 systemd-networkd[1385]: cali3e04c3e05b8: Gained carrier Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:30.883 [INFO][4964] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dswbh-eth0 coredns-66bc5c9577- kube-system 373fc5da-85f1-463d-a6ba-0ede19c097b3 1118 0 2026-04-14 13:32:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dswbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e04c3e05b8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:30.885 [INFO][4964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:32.173 [INFO][4987] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" HandleID="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:32.351 [INFO][4987] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" HandleID="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000370540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dswbh", "timestamp":"2026-04-14 13:33:32.173000103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00013c160)} Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:32.351 [INFO][4987] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:35.337 [INFO][4987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:35.347 [INFO][4987] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:35.600 [INFO][4987] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:35.850 [INFO][4987] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.065 [INFO][4987] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.214 [INFO][4987] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.296 [INFO][4987] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.296 [INFO][4987] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.493 [INFO][4987] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370 Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.619 [INFO][4987] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.950 [INFO][4987] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.951 [INFO][4987] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" host="localhost" Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.951 [INFO][4987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:37.680169 containerd[1458]: 2026-04-14 13:33:36.951 [INFO][4987] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" HandleID="k8s-pod-network.9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:37.685180 containerd[1458]: 2026-04-14 13:33:36.975 [INFO][4964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dswbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"373fc5da-85f1-463d-a6ba-0ede19c097b3", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dswbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e04c3e05b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:37.685180 containerd[1458]: 2026-04-14 13:33:36.975 [INFO][4964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:37.685180 containerd[1458]: 2026-04-14 13:33:36.975 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e04c3e05b8 ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 
13:33:37.685180 containerd[1458]: 2026-04-14 13:33:37.030 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:37.685180 containerd[1458]: 2026-04-14 13:33:37.043 [INFO][4964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dswbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"373fc5da-85f1-463d-a6ba-0ede19c097b3", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370", Pod:"coredns-66bc5c9577-dswbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e04c3e05b8", 
MAC:"da:3f:01:15:66:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:37.685480 containerd[1458]: 2026-04-14 13:33:37.668 [INFO][4964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370" Namespace="kube-system" Pod="coredns-66bc5c9577-dswbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:33:37.856023 containerd[1458]: time="2026-04-14T13:33:37.851763399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:37.856023 containerd[1458]: time="2026-04-14T13:33:37.851876924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:37.856023 containerd[1458]: time="2026-04-14T13:33:37.851895133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:37.856023 containerd[1458]: time="2026-04-14T13:33:37.852088600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:38.079504 systemd[1]: Started cri-containerd-9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370.scope - libcontainer container 9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370. Apr 14 13:33:38.236898 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:38.491824 systemd-networkd[1385]: cali3e04c3e05b8: Gained IPv6LL Apr 14 13:33:38.566050 containerd[1458]: time="2026-04-14T13:33:38.565378600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dswbh,Uid:373fc5da-85f1-463d-a6ba-0ede19c097b3,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370\"" Apr 14 13:33:38.573647 kubelet[2512]: E0414 13:33:38.573568 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:38.748516 containerd[1458]: time="2026-04-14T13:33:38.748287420Z" level=info msg="CreateContainer within sandbox \"9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 13:33:38.846263 containerd[1458]: time="2026-04-14T13:33:38.846063709Z" level=info msg="CreateContainer within sandbox \"9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79c64c501c1e3712ef530e0365a274c0dd811e5e57153d7ea9a616c1d392d932\"" Apr 14 13:33:38.850977 containerd[1458]: time="2026-04-14T13:33:38.849251754Z" level=info msg="StartContainer for \"79c64c501c1e3712ef530e0365a274c0dd811e5e57153d7ea9a616c1d392d932\"" Apr 14 13:33:39.177290 systemd[1]: Started cri-containerd-79c64c501c1e3712ef530e0365a274c0dd811e5e57153d7ea9a616c1d392d932.scope - libcontainer container 
79c64c501c1e3712ef530e0365a274c0dd811e5e57153d7ea9a616c1d392d932. Apr 14 13:33:39.448567 containerd[1458]: time="2026-04-14T13:33:39.448393176Z" level=info msg="StartContainer for \"79c64c501c1e3712ef530e0365a274c0dd811e5e57153d7ea9a616c1d392d932\" returns successfully" Apr 14 13:33:39.532267 systemd-networkd[1385]: calida0ff3bc699: Link UP Apr 14 13:33:39.533548 systemd-networkd[1385]: calida0ff3bc699: Gained carrier Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:30.095 [INFO][4873] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0 calico-apiserver-677c4b66cd- calico-system 8dce0864-7c1c-4c82-8be6-3d53a4d967af 1110 0 2026-04-14 13:32:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677c4b66cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-677c4b66cd-bnqqz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calida0ff3bc699 [] [] }} ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:30.115 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:32.299 [INFO][4981] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" 
HandleID="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:32.442 [INFO][4981] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" HandleID="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138d50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-677c4b66cd-bnqqz", "timestamp":"2026-04-14 13:33:32.299849 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fa840)} Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:32.442 [INFO][4981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:36.951 [INFO][4981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:36.951 [INFO][4981] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:37.368 [INFO][4981] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:37.666 [INFO][4981] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:38.090 [INFO][4981] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:38.501 [INFO][4981] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:38.723 [INFO][4981] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:38.727 [INFO][4981] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:39.029 [INFO][4981] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:39.131 [INFO][4981] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:39.460 [INFO][4981] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:39.464 [INFO][4981] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" host="localhost" Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:39.464 [INFO][4981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:39.755357 containerd[1458]: 2026-04-14 13:33:39.464 [INFO][4981] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" HandleID="k8s-pod-network.38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.756498 containerd[1458]: 2026-04-14 13:33:39.474 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"8dce0864-7c1c-4c82-8be6-3d53a4d967af", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-677c4b66cd-bnqqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida0ff3bc699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:39.756498 containerd[1458]: 2026-04-14 13:33:39.488 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.756498 containerd[1458]: 2026-04-14 13:33:39.495 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida0ff3bc699 ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.756498 containerd[1458]: 2026-04-14 13:33:39.531 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.756498 containerd[1458]: 2026-04-14 13:33:39.557 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"8dce0864-7c1c-4c82-8be6-3d53a4d967af", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af", Pod:"calico-apiserver-677c4b66cd-bnqqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida0ff3bc699", MAC:"22:4c:e0:3b:c4:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:39.756498 containerd[1458]: 2026-04-14 13:33:39.743 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af" Namespace="calico-system" Pod="calico-apiserver-677c4b66cd-bnqqz" WorkloadEndpoint="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:33:39.948356 containerd[1458]: time="2026-04-14T13:33:39.948134513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:39.948356 containerd[1458]: time="2026-04-14T13:33:39.948254204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:39.948356 containerd[1458]: time="2026-04-14T13:33:39.948272413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:39.966289 containerd[1458]: time="2026-04-14T13:33:39.963155211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:40.229116 systemd[1]: Started cri-containerd-38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af.scope - libcontainer container 38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af. 
Apr 14 13:33:40.379975 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:40.600366 systemd-networkd[1385]: calida0ff3bc699: Gained IPv6LL Apr 14 13:33:40.614585 containerd[1458]: time="2026-04-14T13:33:40.610892648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677c4b66cd-bnqqz,Uid:8dce0864-7c1c-4c82-8be6-3d53a4d967af,Namespace:calico-system,Attempt:1,} returns sandbox id \"38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af\"" Apr 14 13:33:40.673402 kubelet[2512]: E0414 13:33:40.673155 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:41.361666 kubelet[2512]: I0414 13:33:41.352890 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dswbh" podStartSLOduration=101.352854651 podStartE2EDuration="1m41.352854651s" podCreationTimestamp="2026-04-14 13:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:33:41.247608267 +0000 UTC m=+106.083940942" watchObservedRunningTime="2026-04-14 13:33:41.352854651 +0000 UTC m=+106.189187323" Apr 14 13:33:41.783184 kubelet[2512]: E0414 13:33:41.783115 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:42.979848 kubelet[2512]: E0414 13:33:42.979384 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:43.267023 containerd[1458]: time="2026-04-14T13:33:43.264319130Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:43.271396 containerd[1458]: time="2026-04-14T13:33:43.268053300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 14 13:33:43.271396 containerd[1458]: time="2026-04-14T13:33:43.271377166Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:43.281834 containerd[1458]: time="2026-04-14T13:33:43.281397902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:43.285135 containerd[1458]: time="2026-04-14T13:33:43.284449262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 9.028370983s" Apr 14 13:33:43.285135 containerd[1458]: time="2026-04-14T13:33:43.284749413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 13:33:43.298018 containerd[1458]: time="2026-04-14T13:33:43.297356502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 14 13:33:43.342008 containerd[1458]: time="2026-04-14T13:33:43.338820094Z" level=info msg="CreateContainer within sandbox \"47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 
13:33:43.395567 containerd[1458]: time="2026-04-14T13:33:43.395179782Z" level=info msg="CreateContainer within sandbox \"47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e5e5e893d5b3647aa71934f9831f256ca05a27d602c89ee1e2b241e6801a681a\"" Apr 14 13:33:43.464571 containerd[1458]: time="2026-04-14T13:33:43.464388228Z" level=info msg="StartContainer for \"e5e5e893d5b3647aa71934f9831f256ca05a27d602c89ee1e2b241e6801a681a\"" Apr 14 13:33:43.728632 systemd[1]: Started cri-containerd-e5e5e893d5b3647aa71934f9831f256ca05a27d602c89ee1e2b241e6801a681a.scope - libcontainer container e5e5e893d5b3647aa71934f9831f256ca05a27d602c89ee1e2b241e6801a681a. Apr 14 13:33:44.150348 containerd[1458]: time="2026-04-14T13:33:44.150157686Z" level=info msg="StartContainer for \"e5e5e893d5b3647aa71934f9831f256ca05a27d602c89ee1e2b241e6801a681a\" returns successfully" Apr 14 13:33:45.424535 kubelet[2512]: I0414 13:33:45.424231 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-677c4b66cd-p7zmd" podStartSLOduration=77.383951702 podStartE2EDuration="1m26.424162538s" podCreationTimestamp="2026-04-14 13:32:19 +0000 UTC" firstStartedPulling="2026-04-14 13:33:34.255538438 +0000 UTC m=+99.091871100" lastFinishedPulling="2026-04-14 13:33:43.295749274 +0000 UTC m=+108.132081936" observedRunningTime="2026-04-14 13:33:45.418640755 +0000 UTC m=+110.254973431" watchObservedRunningTime="2026-04-14 13:33:45.424162538 +0000 UTC m=+110.260495207" Apr 14 13:33:47.246227 containerd[1458]: time="2026-04-14T13:33:47.244045742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:47.246227 containerd[1458]: time="2026-04-14T13:33:47.246142813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, 
bytes read=14704317" Apr 14 13:33:47.249186 containerd[1458]: time="2026-04-14T13:33:47.249126488Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:47.276176 containerd[1458]: time="2026-04-14T13:33:47.275292475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:47.326806 containerd[1458]: time="2026-04-14T13:33:47.326693335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 4.029298954s" Apr 14 13:33:47.326806 containerd[1458]: time="2026-04-14T13:33:47.326734958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 14 13:33:47.335197 containerd[1458]: time="2026-04-14T13:33:47.335020100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 14 13:33:47.359507 containerd[1458]: time="2026-04-14T13:33:47.356306666Z" level=info msg="CreateContainer within sandbox \"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 14 13:33:47.465852 containerd[1458]: time="2026-04-14T13:33:47.464360519Z" level=info msg="CreateContainer within sandbox \"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a3fec7f570146cbae46ffe8c74de26ea807c5fc7cde72b5d8aa571f6cfc36a27\"" Apr 14 13:33:47.530099 containerd[1458]: time="2026-04-14T13:33:47.524259114Z" level=info msg="StartContainer for \"a3fec7f570146cbae46ffe8c74de26ea807c5fc7cde72b5d8aa571f6cfc36a27\"" Apr 14 13:33:47.787480 systemd[1]: Started cri-containerd-a3fec7f570146cbae46ffe8c74de26ea807c5fc7cde72b5d8aa571f6cfc36a27.scope - libcontainer container a3fec7f570146cbae46ffe8c74de26ea807c5fc7cde72b5d8aa571f6cfc36a27. Apr 14 13:33:48.158247 containerd[1458]: time="2026-04-14T13:33:48.157787202Z" level=info msg="StartContainer for \"a3fec7f570146cbae46ffe8c74de26ea807c5fc7cde72b5d8aa571f6cfc36a27\" returns successfully" Apr 14 13:33:48.595657 kubelet[2512]: I0414 13:33:48.592619 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dbfq7" podStartSLOduration=65.618151269 podStartE2EDuration="1m23.592573762s" podCreationTimestamp="2026-04-14 13:32:25 +0000 UTC" firstStartedPulling="2026-04-14 13:33:29.357636518 +0000 UTC m=+94.193969178" lastFinishedPulling="2026-04-14 13:33:47.332059011 +0000 UTC m=+112.168391671" observedRunningTime="2026-04-14 13:33:48.588560959 +0000 UTC m=+113.424893623" watchObservedRunningTime="2026-04-14 13:33:48.592573762 +0000 UTC m=+113.428906422" Apr 14 13:33:48.948865 kubelet[2512]: I0414 13:33:48.947366 2512 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 14 13:33:48.964120 kubelet[2512]: I0414 13:33:48.962055 2512 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 14 13:33:53.638755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533815150.mount: Deactivated successfully. 
Apr 14 13:33:54.585723 kubelet[2512]: E0414 13:33:54.585653 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:55.656053 containerd[1458]: time="2026-04-14T13:33:55.655897296Z" level=info msg="StopPodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\"" Apr 14 13:33:55.764653 containerd[1458]: time="2026-04-14T13:33:55.764566850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:55.771585 containerd[1458]: time="2026-04-14T13:33:55.771467542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 14 13:33:55.789123 containerd[1458]: time="2026-04-14T13:33:55.786771139Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:55.800156 containerd[1458]: time="2026-04-14T13:33:55.799803489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:55.803289 containerd[1458]: time="2026-04-14T13:33:55.802990433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 8.467886937s" Apr 14 13:33:55.803289 containerd[1458]: time="2026-04-14T13:33:55.803268045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image 
reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 14 13:33:55.845138 containerd[1458]: time="2026-04-14T13:33:55.844975703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 14 13:33:55.874270 containerd[1458]: time="2026-04-14T13:33:55.872288901Z" level=info msg="CreateContainer within sandbox \"7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 14 13:33:56.047354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787165154.mount: Deactivated successfully. Apr 14 13:33:56.048788 containerd[1458]: time="2026-04-14T13:33:56.048366233Z" level=info msg="CreateContainer within sandbox \"7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33\"" Apr 14 13:33:56.055270 containerd[1458]: time="2026-04-14T13:33:56.051620626Z" level=info msg="StartContainer for \"40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33\"" Apr 14 13:33:56.287484 systemd[1]: Started cri-containerd-40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33.scope - libcontainer container 40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33. Apr 14 13:33:56.708706 containerd[1458]: time="2026-04-14T13:33:56.708538413Z" level=info msg="StartContainer for \"40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33\" returns successfully" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.282 [WARNING][5589] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g5q72-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc203ea-65cb-4880-91f1-00f13ee08f83", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043", Pod:"coredns-66bc5c9577-g5q72", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d993e4c2ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.284 [INFO][5589] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.284 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" iface="eth0" netns="" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.284 [INFO][5589] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.284 [INFO][5589] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.674 [INFO][5615] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.680 [INFO][5615] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.680 [INFO][5615] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.741 [WARNING][5615] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.742 [INFO][5615] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.762 [INFO][5615] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:56.782032 containerd[1458]: 2026-04-14 13:33:56.773 [INFO][5589] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:56.784884 containerd[1458]: time="2026-04-14T13:33:56.784196981Z" level=info msg="TearDown network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" successfully" Apr 14 13:33:56.784884 containerd[1458]: time="2026-04-14T13:33:56.784308420Z" level=info msg="StopPodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" returns successfully" Apr 14 13:33:56.946520 containerd[1458]: time="2026-04-14T13:33:56.945537373Z" level=info msg="RemovePodSandbox for \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\"" Apr 14 13:33:56.949933 containerd[1458]: time="2026-04-14T13:33:56.949794945Z" level=info msg="Forcibly stopping sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\"" Apr 14 13:33:57.175157 kubelet[2512]: I0414 13:33:57.174399 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-w9cxz" podStartSLOduration=77.49895426 podStartE2EDuration="1m38.174372373s" 
podCreationTimestamp="2026-04-14 13:32:19 +0000 UTC" firstStartedPulling="2026-04-14 13:33:35.164842855 +0000 UTC m=+100.001175515" lastFinishedPulling="2026-04-14 13:33:55.84026096 +0000 UTC m=+120.676593628" observedRunningTime="2026-04-14 13:33:57.166635242 +0000 UTC m=+122.002967919" watchObservedRunningTime="2026-04-14 13:33:57.174372373 +0000 UTC m=+122.010705033" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.298 [WARNING][5659] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g5q72-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbc203ea-65cb-4880-91f1-00f13ee08f83", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28273c80364bc4217086b070785df05e9275efe491d6561a4f6f703c69b3c043", Pod:"coredns-66bc5c9577-g5q72", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d993e4c2ad", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.345 [INFO][5659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.346 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" iface="eth0" netns="" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.346 [INFO][5659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.346 [INFO][5659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.496 [INFO][5686] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.500 [INFO][5686] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.572 [INFO][5686] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.695 [WARNING][5686] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.695 [INFO][5686] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" HandleID="k8s-pod-network.485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Workload="localhost-k8s-coredns--66bc5c9577--g5q72-eth0" Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.865 [INFO][5686] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:57.893221 containerd[1458]: 2026-04-14 13:33:57.877 [INFO][5659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4" Apr 14 13:33:57.947768 containerd[1458]: time="2026-04-14T13:33:57.898704878Z" level=info msg="TearDown network for sandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" successfully" Apr 14 13:33:57.988073 containerd[1458]: time="2026-04-14T13:33:57.987779118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 13:33:57.988703 containerd[1458]: time="2026-04-14T13:33:57.988126120Z" level=info msg="RemovePodSandbox \"485c31bc7cee0228b09bbaefe1f748b2177c380c7b105a3a08adad39976bf0e4\" returns successfully" Apr 14 13:33:58.038166 containerd[1458]: time="2026-04-14T13:33:58.037637000Z" level=info msg="StopPodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\"" Apr 14 13:33:58.089348 systemd[1]: run-containerd-runc-k8s.io-40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33-runc.3XoUIM.mount: Deactivated successfully. Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.293 [WARNING][5725] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0", GenerateName:"calico-kube-controllers-fd6dc49cc-", Namespace:"calico-system", SelfLink:"", UID:"4783cd1c-b4fb-4d25-b891-52cfc6659501", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6dc49cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1", 
Pod:"calico-kube-controllers-fd6dc49cc-7555d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c257c97333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.294 [INFO][5725] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.294 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" iface="eth0" netns="" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.297 [INFO][5725] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.297 [INFO][5725] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.815 [INFO][5738] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.819 [INFO][5738] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.821 [INFO][5738] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.943 [WARNING][5738] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.943 [INFO][5738] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.973 [INFO][5738] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:58.990279 containerd[1458]: 2026-04-14 13:33:58.987 [INFO][5725] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:58.990279 containerd[1458]: time="2026-04-14T13:33:58.990190132Z" level=info msg="TearDown network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" successfully" Apr 14 13:33:58.990279 containerd[1458]: time="2026-04-14T13:33:58.990217080Z" level=info msg="StopPodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" returns successfully" Apr 14 13:33:58.991470 containerd[1458]: time="2026-04-14T13:33:58.991042496Z" level=info msg="RemovePodSandbox for \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\"" Apr 14 13:33:58.991470 containerd[1458]: time="2026-04-14T13:33:58.991067456Z" level=info msg="Forcibly stopping sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\"" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.228 [WARNING][5757] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0", GenerateName:"calico-kube-controllers-fd6dc49cc-", Namespace:"calico-system", SelfLink:"", UID:"4783cd1c-b4fb-4d25-b891-52cfc6659501", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6dc49cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1", Pod:"calico-kube-controllers-fd6dc49cc-7555d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c257c97333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.233 [INFO][5757] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.233 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" iface="eth0" netns="" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.233 [INFO][5757] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.233 [INFO][5757] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.443 [INFO][5765] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.448 [INFO][5765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.448 [INFO][5765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.497 [WARNING][5765] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.498 [INFO][5765] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" HandleID="k8s-pod-network.9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Workload="localhost-k8s-calico--kube--controllers--fd6dc49cc--7555d-eth0" Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.532 [INFO][5765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:59.543106 containerd[1458]: 2026-04-14 13:33:59.534 [INFO][5757] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c" Apr 14 13:33:59.555336 containerd[1458]: time="2026-04-14T13:33:59.544497891Z" level=info msg="TearDown network for sandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" successfully" Apr 14 13:33:59.643601 containerd[1458]: time="2026-04-14T13:33:59.643392779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 13:33:59.646208 containerd[1458]: time="2026-04-14T13:33:59.645608185Z" level=info msg="RemovePodSandbox \"9cb838a4ce3b7e3ce73a17f897b9b6a1b8af81bb3c09b500b885ceb5af89f71c\" returns successfully" Apr 14 13:33:59.651712 containerd[1458]: time="2026-04-14T13:33:59.649669350Z" level=info msg="StopPodSandbox for \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\"" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.105 [WARNING][5783] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"8dce0864-7c1c-4c82-8be6-3d53a4d967af", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af", Pod:"calico-apiserver-677c4b66cd-bnqqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida0ff3bc699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.107 [INFO][5783] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.107 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" iface="eth0" netns="" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.107 [INFO][5783] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.107 [INFO][5783] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.457 [INFO][5814] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.464 [INFO][5814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.489 [INFO][5814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.785 [WARNING][5814] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.785 [INFO][5814] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.879 [INFO][5814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:00.955761 containerd[1458]: 2026-04-14 13:34:00.887 [INFO][5783] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:00.957621 containerd[1458]: time="2026-04-14T13:34:00.956232029Z" level=info msg="TearDown network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" successfully" Apr 14 13:34:00.957621 containerd[1458]: time="2026-04-14T13:34:00.956309534Z" level=info msg="StopPodSandbox for \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" returns successfully" Apr 14 13:34:00.989062 containerd[1458]: time="2026-04-14T13:34:00.988399851Z" level=info msg="RemovePodSandbox for \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\"" Apr 14 13:34:00.989062 containerd[1458]: time="2026-04-14T13:34:00.988626819Z" level=info msg="Forcibly stopping sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\"" Apr 14 13:34:01.843499 systemd[1]: run-containerd-runc-k8s.io-9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69-runc.4DBGJ9.mount: Deactivated successfully. 
Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.473 [WARNING][5832] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"8dce0864-7c1c-4c82-8be6-3d53a4d967af", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af", Pod:"calico-apiserver-677c4b66cd-bnqqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calida0ff3bc699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.478 [INFO][5832] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.480 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" iface="eth0" netns="" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.480 [INFO][5832] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.480 [INFO][5832] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.690 [INFO][5851] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.694 [INFO][5851] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.701 [INFO][5851] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.789 [WARNING][5851] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.789 [INFO][5851] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" HandleID="k8s-pod-network.2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Workload="localhost-k8s-calico--apiserver--677c4b66cd--bnqqz-eth0" Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.889 [INFO][5851] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:01.906533 containerd[1458]: 2026-04-14 13:34:01.896 [INFO][5832] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b" Apr 14 13:34:01.907572 containerd[1458]: time="2026-04-14T13:34:01.906582028Z" level=info msg="TearDown network for sandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" successfully" Apr 14 13:34:01.934065 containerd[1458]: time="2026-04-14T13:34:01.933835191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 13:34:01.935094 containerd[1458]: time="2026-04-14T13:34:01.934989931Z" level=info msg="RemovePodSandbox \"2c25a5622428a4e82d255e243adb8f44651fe16bd274f1c67fc9b3fc8c686f7b\" returns successfully" Apr 14 13:34:01.937984 containerd[1458]: time="2026-04-14T13:34:01.937818106Z" level=info msg="StopPodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\"" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.380 [WARNING][5889] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dswbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"373fc5da-85f1-463d-a6ba-0ede19c097b3", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370", Pod:"coredns-66bc5c9577-dswbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e04c3e05b8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.384 [INFO][5889] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.384 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" iface="eth0" netns="" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.384 [INFO][5889] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.384 [INFO][5889] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.965 [INFO][5899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.965 [INFO][5899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:02.965 [INFO][5899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:03.107 [WARNING][5899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:03.108 [INFO][5899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:03.117 [INFO][5899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:03.123771 containerd[1458]: 2026-04-14 13:34:03.119 [INFO][5889] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:03.124371 containerd[1458]: time="2026-04-14T13:34:03.123829650Z" level=info msg="TearDown network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" successfully" Apr 14 13:34:03.124371 containerd[1458]: time="2026-04-14T13:34:03.123860436Z" level=info msg="StopPodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" returns successfully" Apr 14 13:34:03.127195 containerd[1458]: time="2026-04-14T13:34:03.126612283Z" level=info msg="RemovePodSandbox for \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\"" Apr 14 13:34:03.127574 containerd[1458]: time="2026-04-14T13:34:03.127467015Z" level=info msg="Forcibly stopping sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\"" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.280 [WARNING][5917] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dswbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"373fc5da-85f1-463d-a6ba-0ede19c097b3", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f36ffb61fdaf75f173cc07f14b1c0d88218ed3a3683858b3dc72ae52d9db370", Pod:"coredns-66bc5c9577-dswbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e04c3e05b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.280 [INFO][5917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.280 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" iface="eth0" netns="" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.280 [INFO][5917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.280 [INFO][5917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.662 [INFO][5926] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.662 [INFO][5926] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:03.662 [INFO][5926] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:04.106 [WARNING][5926] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:04.113 [INFO][5926] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" HandleID="k8s-pod-network.a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Workload="localhost-k8s-coredns--66bc5c9577--dswbh-eth0" Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:04.138 [INFO][5926] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:04.176328 containerd[1458]: 2026-04-14 13:34:04.160 [INFO][5917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf" Apr 14 13:34:04.180462 containerd[1458]: time="2026-04-14T13:34:04.176509546Z" level=info msg="TearDown network for sandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" successfully" Apr 14 13:34:04.249553 containerd[1458]: time="2026-04-14T13:34:04.249365416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 13:34:04.250450 containerd[1458]: time="2026-04-14T13:34:04.249614056Z" level=info msg="RemovePodSandbox \"a86700c1c3d6066ee6cc04943c3295744510fbfe20f05bf590133ffa9e4f66bf\" returns successfully" Apr 14 13:34:04.270188 containerd[1458]: time="2026-04-14T13:34:04.268862344Z" level=info msg="StopPodSandbox for \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\"" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:04.802 [WARNING][5945] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbfq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e0f11f62-5546-4397-955f-97b1110f25d7", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7", Pod:"csi-node-driver-dbfq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b420826253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:04.803 [INFO][5945] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:04.803 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" iface="eth0" netns="" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:04.803 [INFO][5945] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:04.803 [INFO][5945] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.165 [INFO][5953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.197 [INFO][5953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.197 [INFO][5953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.253 [WARNING][5953] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.253 [INFO][5953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.292 [INFO][5953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:05.358025 containerd[1458]: 2026-04-14 13:34:05.342 [INFO][5945] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:05.358025 containerd[1458]: time="2026-04-14T13:34:05.357413526Z" level=info msg="TearDown network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" successfully" Apr 14 13:34:05.358025 containerd[1458]: time="2026-04-14T13:34:05.357625389Z" level=info msg="StopPodSandbox for \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" returns successfully" Apr 14 13:34:05.380780 containerd[1458]: time="2026-04-14T13:34:05.380698129Z" level=info msg="RemovePodSandbox for \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\"" Apr 14 13:34:05.380780 containerd[1458]: time="2026-04-14T13:34:05.380797248Z" level=info msg="Forcibly stopping sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\"" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:05.880 [WARNING][5970] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dbfq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e0f11f62-5546-4397-955f-97b1110f25d7", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00f2cde80b4ea7a43f1f961afee1cf8325f759ec4e3793ea281fe4172d3451a7", Pod:"csi-node-driver-dbfq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b420826253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:05.882 [INFO][5970] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:05.882 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" iface="eth0" netns="" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:05.882 [INFO][5970] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:05.882 [INFO][5970] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.258 [INFO][5978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.260 [INFO][5978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.261 [INFO][5978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.337 [WARNING][5978] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.340 [INFO][5978] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" HandleID="k8s-pod-network.a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Workload="localhost-k8s-csi--node--driver--dbfq7-eth0" Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.388 [INFO][5978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:06.397505 containerd[1458]: 2026-04-14 13:34:06.392 [INFO][5970] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1" Apr 14 13:34:06.401087 containerd[1458]: time="2026-04-14T13:34:06.398218607Z" level=info msg="TearDown network for sandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" successfully" Apr 14 13:34:06.467736 containerd[1458]: time="2026-04-14T13:34:06.467547265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 13:34:06.468591 containerd[1458]: time="2026-04-14T13:34:06.468514728Z" level=info msg="RemovePodSandbox \"a191d72d660c451e73f55657692f9ef10d5cdfd74230999c4bd8238cb530e3c1\" returns successfully" Apr 14 13:34:06.470799 containerd[1458]: time="2026-04-14T13:34:06.470775197Z" level=info msg="StopPodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\"" Apr 14 13:34:06.965400 containerd[1458]: time="2026-04-14T13:34:06.965270618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:34:06.966753 containerd[1458]: time="2026-04-14T13:34:06.966538052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 14 13:34:06.975883 containerd[1458]: time="2026-04-14T13:34:06.972274451Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:34:07.086972 containerd[1458]: time="2026-04-14T13:34:07.086850879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:34:07.088961 containerd[1458]: time="2026-04-14T13:34:07.088859933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 11.243841115s" Apr 14 13:34:07.090225 containerd[1458]: time="2026-04-14T13:34:07.089692498Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 14 13:34:07.117470 containerd[1458]: time="2026-04-14T13:34:07.112941424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 13:34:07.171420 containerd[1458]: time="2026-04-14T13:34:07.171201869Z" level=info msg="CreateContainer within sandbox \"c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 14 13:34:07.261516 containerd[1458]: time="2026-04-14T13:34:07.261148496Z" level=info msg="CreateContainer within sandbox \"c62e579fadd9df19dd1a37298eb9dabc3df7fcfea988f0d056af7e204fadd9e1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"22a80e036de42cb696cb4d3b686685b69e09246abc367a182c45bcc4a00c874e\"" Apr 14 13:34:07.275592 containerd[1458]: time="2026-04-14T13:34:07.275347011Z" level=info msg="StartContainer for \"22a80e036de42cb696cb4d3b686685b69e09246abc367a182c45bcc4a00c874e\"" Apr 14 13:34:07.408230 systemd[1]: Started cri-containerd-22a80e036de42cb696cb4d3b686685b69e09246abc367a182c45bcc4a00c874e.scope - libcontainer container 22a80e036de42cb696cb4d3b686685b69e09246abc367a182c45bcc4a00c874e. 
Apr 14 13:34:07.584636 containerd[1458]: time="2026-04-14T13:34:07.584139751Z" level=info msg="StartContainer for \"22a80e036de42cb696cb4d3b686685b69e09246abc367a182c45bcc4a00c874e\" returns successfully" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:06.942 [WARNING][5995] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" WorkloadEndpoint="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:06.952 [INFO][5995] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:06.958 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" iface="eth0" netns="" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:06.958 [INFO][5995] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:06.960 [INFO][5995] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.389 [INFO][6004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.396 [INFO][6004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.401 [INFO][6004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.553 [WARNING][6004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.553 [INFO][6004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.576 [INFO][6004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:07.590741 containerd[1458]: 2026-04-14 13:34:07.584 [INFO][5995] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:07.593741 containerd[1458]: time="2026-04-14T13:34:07.590860257Z" level=info msg="TearDown network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" successfully" Apr 14 13:34:07.593741 containerd[1458]: time="2026-04-14T13:34:07.590891555Z" level=info msg="StopPodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" returns successfully" Apr 14 13:34:07.600534 containerd[1458]: time="2026-04-14T13:34:07.596151768Z" level=info msg="RemovePodSandbox for \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\"" Apr 14 13:34:07.600534 containerd[1458]: time="2026-04-14T13:34:07.596271855Z" level=info msg="Forcibly stopping sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\"" Apr 14 13:34:07.672604 containerd[1458]: time="2026-04-14T13:34:07.672022920Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:34:07.674737 containerd[1458]: time="2026-04-14T13:34:07.674567551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 14 13:34:07.679538 containerd[1458]: time="2026-04-14T13:34:07.679490344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 566.513409ms" Apr 14 13:34:07.679538 containerd[1458]: time="2026-04-14T13:34:07.679537031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 
13:34:07.777349 containerd[1458]: time="2026-04-14T13:34:07.776402522Z" level=info msg="CreateContainer within sandbox \"38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 13:34:07.870721 containerd[1458]: time="2026-04-14T13:34:07.867498172Z" level=info msg="CreateContainer within sandbox \"38f3480c2ceb38fe364926639bf44610359b6f84fcca96a40c84c859e6a9b2af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a6a9b139bbf9f54b5560f3182c26db7967f0aa99ca66f9b613535faef63521c\"" Apr 14 13:34:07.892070 containerd[1458]: time="2026-04-14T13:34:07.886746258Z" level=info msg="StartContainer for \"4a6a9b139bbf9f54b5560f3182c26db7967f0aa99ca66f9b613535faef63521c\"" Apr 14 13:34:08.217783 systemd[1]: Started cri-containerd-4a6a9b139bbf9f54b5560f3182c26db7967f0aa99ca66f9b613535faef63521c.scope - libcontainer container 4a6a9b139bbf9f54b5560f3182c26db7967f0aa99ca66f9b613535faef63521c. Apr 14 13:34:08.970387 containerd[1458]: time="2026-04-14T13:34:08.970207172Z" level=info msg="StartContainer for \"4a6a9b139bbf9f54b5560f3182c26db7967f0aa99ca66f9b613535faef63521c\" returns successfully" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.199 [WARNING][6062] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" WorkloadEndpoint="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.200 [INFO][6062] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.200 [INFO][6062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" iface="eth0" netns="" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.200 [INFO][6062] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.200 [INFO][6062] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.549 [INFO][6103] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.551 [INFO][6103] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.555 [INFO][6103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.935 [WARNING][6103] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:08.935 [INFO][6103] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" HandleID="k8s-pod-network.53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Workload="localhost-k8s-whisker--75dbfc9fc8--snl69-eth0" Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:09.020 [INFO][6103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:09.045574 containerd[1458]: 2026-04-14 13:34:09.032 [INFO][6062] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c" Apr 14 13:34:09.048315 containerd[1458]: time="2026-04-14T13:34:09.045803990Z" level=info msg="TearDown network for sandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" successfully" Apr 14 13:34:09.091999 containerd[1458]: time="2026-04-14T13:34:09.091836748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 13:34:09.091999 containerd[1458]: time="2026-04-14T13:34:09.091995424Z" level=info msg="RemovePodSandbox \"53042422fd0e5668ea747af6ef8fe5c5a787155ff7bfdb772a80ee1cc728ce3c\" returns successfully" Apr 14 13:34:09.100277 containerd[1458]: time="2026-04-14T13:34:09.099336442Z" level=info msg="StopPodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\"" Apr 14 13:34:10.130257 kubelet[2512]: I0414 13:34:10.116704 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fd6dc49cc-7555d" podStartSLOduration=74.548907966 podStartE2EDuration="1m45.109817834s" podCreationTimestamp="2026-04-14 13:32:25 +0000 UTC" firstStartedPulling="2026-04-14 13:33:36.548543893 +0000 UTC m=+101.384876565" lastFinishedPulling="2026-04-14 13:34:07.109453769 +0000 UTC m=+131.945786433" observedRunningTime="2026-04-14 13:34:09.294786401 +0000 UTC m=+134.131119061" watchObservedRunningTime="2026-04-14 13:34:10.109817834 +0000 UTC m=+134.946150502" Apr 14 13:34:10.135970 kubelet[2512]: I0414 13:34:10.135048 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-677c4b66cd-bnqqz" podStartSLOduration=84.013527403 podStartE2EDuration="1m51.135023459s" podCreationTimestamp="2026-04-14 13:32:19 +0000 UTC" firstStartedPulling="2026-04-14 13:33:40.634232468 +0000 UTC m=+105.470565139" lastFinishedPulling="2026-04-14 13:34:07.755728535 +0000 UTC m=+132.592061195" observedRunningTime="2026-04-14 13:34:10.084805665 +0000 UTC m=+134.921138339" watchObservedRunningTime="2026-04-14 13:34:10.135023459 +0000 UTC m=+134.971356127" Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:09.667 [WARNING][6155] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c593b64b-dfef-4876-b0f3-e403e442c5f4", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc", Pod:"goldmane-cccfbd5cf-w9cxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ccfa2af1f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:09.689 [INFO][6155] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:09.695 [INFO][6155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" iface="eth0" netns=""
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:09.700 [INFO][6155] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:09.741 [INFO][6155] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.298 [INFO][6171] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.331 [INFO][6171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.335 [INFO][6171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.467 [WARNING][6171] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.472 [INFO][6171] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.666 [INFO][6171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:34:10.698602 containerd[1458]: 2026-04-14 13:34:10.691 [INFO][6155] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:10.736704 containerd[1458]: time="2026-04-14T13:34:10.699054702Z" level=info msg="TearDown network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" successfully"
Apr 14 13:34:10.736704 containerd[1458]: time="2026-04-14T13:34:10.699097101Z" level=info msg="StopPodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" returns successfully"
Apr 14 13:34:10.736704 containerd[1458]: time="2026-04-14T13:34:10.735234482Z" level=info msg="RemovePodSandbox for \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\""
Apr 14 13:34:10.736704 containerd[1458]: time="2026-04-14T13:34:10.735455723Z" level=info msg="Forcibly stopping sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\""
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:11.371 [WARNING][6212] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c593b64b-dfef-4876-b0f3-e403e442c5f4", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7df7dbe91bf74775d70a6224d14eb7c1d52ad736da2d4da68accd431b421eacc", Pod:"goldmane-cccfbd5cf-w9cxz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ccfa2af1f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:11.431 [INFO][6212] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:11.433 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" iface="eth0" netns=""
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:11.436 [INFO][6212] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:11.436 [INFO][6212] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.088 [INFO][6224] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.127 [INFO][6224] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.132 [INFO][6224] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.397 [WARNING][6224] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.397 [INFO][6224] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" HandleID="k8s-pod-network.71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3" Workload="localhost-k8s-goldmane--cccfbd5cf--w9cxz-eth0"
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.499 [INFO][6224] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:34:12.508205 containerd[1458]: 2026-04-14 13:34:12.503 [INFO][6212] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3"
Apr 14 13:34:12.508205 containerd[1458]: time="2026-04-14T13:34:12.508104087Z" level=info msg="TearDown network for sandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" successfully"
Apr 14 13:34:12.585255 containerd[1458]: time="2026-04-14T13:34:12.584281280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 14 13:34:12.585255 containerd[1458]: time="2026-04-14T13:34:12.584648146Z" level=info msg="RemovePodSandbox \"71e651faa6ca4984c49458f2e611db80119bd4ee0ab732fdf77e1dda3b71f9c3\" returns successfully"
Apr 14 13:34:12.589009 containerd[1458]: time="2026-04-14T13:34:12.588105273Z" level=info msg="StopPodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\""
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:13.227 [WARNING][6241] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"7a48f0b0-2c86-41cd-b28b-7d4223f81409", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe", Pod:"calico-apiserver-677c4b66cd-p7zmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5d2c6cf735b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:13.228 [INFO][6241] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:13.228 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" iface="eth0" netns=""
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:13.228 [INFO][6241] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:13.228 [INFO][6241] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.232 [INFO][6250] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.234 [INFO][6250] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.234 [INFO][6250] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.415 [WARNING][6250] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.415 [INFO][6250] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.490 [INFO][6250] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:34:14.594350 containerd[1458]: 2026-04-14 13:34:14.564 [INFO][6241] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:14.595158 containerd[1458]: time="2026-04-14T13:34:14.594500148Z" level=info msg="TearDown network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" successfully"
Apr 14 13:34:14.595158 containerd[1458]: time="2026-04-14T13:34:14.594537709Z" level=info msg="StopPodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" returns successfully"
Apr 14 13:34:14.619807 containerd[1458]: time="2026-04-14T13:34:14.616793926Z" level=info msg="RemovePodSandbox for \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\""
Apr 14 13:34:14.619807 containerd[1458]: time="2026-04-14T13:34:14.616837966Z" level=info msg="Forcibly stopping sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\""
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.175 [WARNING][6271] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0", GenerateName:"calico-apiserver-677c4b66cd-", Namespace:"calico-system", SelfLink:"", UID:"7a48f0b0-2c86-41cd-b28b-7d4223f81409", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677c4b66cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47f8cd6b78ee808b92f7be5d6255c8920cde84e324255db480a42a3045825bbe", Pod:"calico-apiserver-677c4b66cd-p7zmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5d2c6cf735b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.178 [INFO][6271] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.178 [INFO][6271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" iface="eth0" netns=""
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.180 [INFO][6271] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.180 [INFO][6271] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.892 [INFO][6280] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.899 [INFO][6280] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.904 [INFO][6280] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.963 [WARNING][6280] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:15.966 [INFO][6280] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" HandleID="k8s-pod-network.5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7" Workload="localhost-k8s-calico--apiserver--677c4b66cd--p7zmd-eth0"
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:16.052 [INFO][6280] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:34:16.063415 containerd[1458]: 2026-04-14 13:34:16.056 [INFO][6271] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7"
Apr 14 13:34:16.063415 containerd[1458]: time="2026-04-14T13:34:16.062700667Z" level=info msg="TearDown network for sandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" successfully"
Apr 14 13:34:16.081290 containerd[1458]: time="2026-04-14T13:34:16.075865763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 14 13:34:16.081290 containerd[1458]: time="2026-04-14T13:34:16.076565256Z" level=info msg="RemovePodSandbox \"5a0040b9710592c583abfd7828f2325049f31f1158404578eee8681c8ef7dfd7\" returns successfully" Apr 14 13:34:26.601016 kubelet[2512]: E0414 13:34:26.600766 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:28.612799 kubelet[2512]: E0414 13:34:28.611838 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:32.592382 kubelet[2512]: E0414 13:34:32.592339 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:33.599702 kubelet[2512]: E0414 13:34:33.599072 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:37.584340 kubelet[2512]: E0414 13:34:37.584214 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:39.666175 systemd[1]: run-containerd-runc-k8s.io-22a80e036de42cb696cb4d3b686685b69e09246abc367a182c45bcc4a00c874e-runc.5eTWHP.mount: Deactivated successfully. Apr 14 13:34:45.460098 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:35800.service - OpenSSH per-connection server daemon (10.0.0.1:35800). 
Apr 14 13:34:45.877382 sshd[6378]: Accepted publickey for core from 10.0.0.1 port 35800 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:34:45.885403 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:34:45.951672 systemd-logind[1445]: New session 8 of user core.
Apr 14 13:34:45.969045 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 14 13:34:48.192271 sshd[6378]: pam_unix(sshd:session): session closed for user core
Apr 14 13:34:48.211363 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:35800.service: Deactivated successfully.
Apr 14 13:34:48.214254 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit.
Apr 14 13:34:48.236118 systemd[1]: session-8.scope: Deactivated successfully.
Apr 14 13:34:48.253492 systemd[1]: session-8.scope: Consumed 1.269s CPU time.
Apr 14 13:34:48.264220 systemd-logind[1445]: Removed session 8.
Apr 14 13:34:53.261345 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:52976.service - OpenSSH per-connection server daemon (10.0.0.1:52976).
Apr 14 13:34:53.443353 sshd[6438]: Accepted publickey for core from 10.0.0.1 port 52976 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:34:53.454691 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:34:53.472039 systemd-logind[1445]: New session 9 of user core.
Apr 14 13:34:53.482569 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 14 13:34:54.427958 sshd[6438]: pam_unix(sshd:session): session closed for user core
Apr 14 13:34:54.432500 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:52976.service: Deactivated successfully.
Apr 14 13:34:54.435182 systemd[1]: session-9.scope: Deactivated successfully.
Apr 14 13:34:54.441620 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit.
Apr 14 13:34:54.446468 systemd-logind[1445]: Removed session 9.
Apr 14 13:34:54.582049 kubelet[2512]: E0414 13:34:54.580094 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:34:57.580937 kubelet[2512]: E0414 13:34:57.580343 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:34:59.450175 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:52962.service - OpenSSH per-connection server daemon (10.0.0.1:52962).
Apr 14 13:34:59.641097 sshd[6478]: Accepted publickey for core from 10.0.0.1 port 52962 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:34:59.643033 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:34:59.648190 systemd-logind[1445]: New session 10 of user core.
Apr 14 13:34:59.667735 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 14 13:35:00.662749 sshd[6478]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:00.682982 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:52962.service: Deactivated successfully.
Apr 14 13:35:00.693826 systemd[1]: session-10.scope: Deactivated successfully.
Apr 14 13:35:00.700045 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Apr 14 13:35:00.725321 systemd-logind[1445]: Removed session 10.
Apr 14 13:35:05.760988 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:52964.service - OpenSSH per-connection server daemon (10.0.0.1:52964).
Apr 14 13:35:05.964806 sshd[6565]: Accepted publickey for core from 10.0.0.1 port 52964 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:05.967024 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:06.037399 systemd-logind[1445]: New session 11 of user core.
Apr 14 13:35:06.064118 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 14 13:35:06.999807 sshd[6565]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:07.015354 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:52964.service: Deactivated successfully.
Apr 14 13:35:07.016668 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit.
Apr 14 13:35:07.020085 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 13:35:07.027134 systemd-logind[1445]: Removed session 11.
Apr 14 13:35:12.041260 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:41988.service - OpenSSH per-connection server daemon (10.0.0.1:41988).
Apr 14 13:35:12.200895 sshd[6603]: Accepted publickey for core from 10.0.0.1 port 41988 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:12.218950 sshd[6603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:12.240040 systemd-logind[1445]: New session 12 of user core.
Apr 14 13:35:12.261592 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 13:35:12.979227 sshd[6603]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:12.992683 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:41988.service: Deactivated successfully.
Apr 14 13:35:13.045357 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 13:35:13.066707 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit.
Apr 14 13:35:13.071862 systemd-logind[1445]: Removed session 12.
Apr 14 13:35:18.055879 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:42002.service - OpenSSH per-connection server daemon (10.0.0.1:42002).
Apr 14 13:35:18.237492 sshd[6618]: Accepted publickey for core from 10.0.0.1 port 42002 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:18.239877 sshd[6618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:18.262641 systemd-logind[1445]: New session 13 of user core.
Apr 14 13:35:18.278683 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 13:35:18.706835 sshd[6618]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:18.718263 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:42002.service: Deactivated successfully.
Apr 14 13:35:18.723726 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 13:35:18.724522 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit.
Apr 14 13:35:18.725737 systemd-logind[1445]: Removed session 13.
Apr 14 13:35:23.794457 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734).
Apr 14 13:35:23.883960 sshd[6633]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:23.901291 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:23.981748 systemd-logind[1445]: New session 14 of user core.
Apr 14 13:35:23.992846 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 13:35:24.842061 sshd[6633]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:24.861052 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:55734.service: Deactivated successfully.
Apr 14 13:35:24.879773 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 13:35:24.887715 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit.
Apr 14 13:35:24.892325 systemd-logind[1445]: Removed session 14.
Apr 14 13:35:29.631157 kubelet[2512]: E0414 13:35:29.630340 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:35:29.962212 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:56318.service - OpenSSH per-connection server daemon (10.0.0.1:56318).
Apr 14 13:35:30.085535 sshd[6669]: Accepted publickey for core from 10.0.0.1 port 56318 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:30.099148 sshd[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:30.181049 systemd-logind[1445]: New session 15 of user core.
Apr 14 13:35:30.196069 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 13:35:30.999311 sshd[6669]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:31.015129 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:56318.service: Deactivated successfully.
Apr 14 13:35:31.028269 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 13:35:31.029369 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Apr 14 13:35:31.032820 systemd-logind[1445]: Removed session 15.
Apr 14 13:35:33.593709 kubelet[2512]: E0414 13:35:33.590592 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:35:36.055552 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:56324.service - OpenSSH per-connection server daemon (10.0.0.1:56324).
Apr 14 13:35:36.237306 sshd[6709]: Accepted publickey for core from 10.0.0.1 port 56324 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:36.242489 sshd[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:36.267036 systemd-logind[1445]: New session 16 of user core.
Apr 14 13:35:36.274895 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 13:35:36.697683 kubelet[2512]: E0414 13:35:36.696366 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:35:37.264884 sshd[6709]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:37.312463 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:56324.service: Deactivated successfully.
Apr 14 13:35:37.320514 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 13:35:37.321327 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit.
Apr 14 13:35:37.325738 systemd-logind[1445]: Removed session 16.
Apr 14 13:35:42.288388 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:59286.service - OpenSSH per-connection server daemon (10.0.0.1:59286).
Apr 14 13:35:42.476533 sshd[6744]: Accepted publickey for core from 10.0.0.1 port 59286 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:42.484525 sshd[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:42.496562 systemd-logind[1445]: New session 17 of user core.
Apr 14 13:35:42.507496 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 13:35:43.443468 sshd[6744]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:43.454813 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:59286.service: Deactivated successfully.
Apr 14 13:35:43.472213 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 13:35:43.494088 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Apr 14 13:35:43.580868 systemd-logind[1445]: Removed session 17.
Apr 14 13:35:48.532443 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:59290.service - OpenSSH per-connection server daemon (10.0.0.1:59290).
Apr 14 13:35:49.171101 sshd[6759]: Accepted publickey for core from 10.0.0.1 port 59290 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:49.179306 sshd[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:49.198304 systemd-logind[1445]: New session 18 of user core.
Apr 14 13:35:49.244758 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 13:35:49.934127 sshd[6759]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:49.943632 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:59290.service: Deactivated successfully.
Apr 14 13:35:49.955352 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 13:35:49.976474 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Apr 14 13:35:49.979743 systemd-logind[1445]: Removed session 18.
Apr 14 13:35:55.026844 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:33064.service - OpenSSH per-connection server daemon (10.0.0.1:33064).
Apr 14 13:35:55.216823 sshd[6784]: Accepted publickey for core from 10.0.0.1 port 33064 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:35:55.221051 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:35:55.237360 systemd-logind[1445]: New session 19 of user core.
Apr 14 13:35:55.281507 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 13:35:56.381575 sshd[6784]: pam_unix(sshd:session): session closed for user core
Apr 14 13:35:56.391130 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:33064.service: Deactivated successfully.
Apr 14 13:35:56.394811 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 13:35:56.399785 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Apr 14 13:35:56.405007 systemd-logind[1445]: Removed session 19.
Apr 14 13:35:56.588297 kubelet[2512]: E0414 13:35:56.588083 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:00.591424 kubelet[2512]: E0414 13:36:00.590729 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:01.422275 systemd[1]: Started sshd@19-10.0.0.9:22-10.0.0.1:55336.service - OpenSSH per-connection server daemon (10.0.0.1:55336).
Apr 14 13:36:01.563338 sshd[6864]: Accepted publickey for core from 10.0.0.1 port 55336 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:01.572981 sshd[6864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:01.614558 systemd-logind[1445]: New session 20 of user core.
Apr 14 13:36:01.627747 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 13:36:02.599002 sshd[6864]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:02.662347 systemd[1]: sshd@19-10.0.0.9:22-10.0.0.1:55336.service: Deactivated successfully.
Apr 14 13:36:02.671456 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 13:36:02.673632 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Apr 14 13:36:02.675053 systemd-logind[1445]: Removed session 20.
Apr 14 13:36:07.677707 systemd[1]: Started sshd@20-10.0.0.9:22-10.0.0.1:55352.service - OpenSSH per-connection server daemon (10.0.0.1:55352).
Apr 14 13:36:07.891443 sshd[6902]: Accepted publickey for core from 10.0.0.1 port 55352 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:07.894521 sshd[6902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:07.900452 systemd-logind[1445]: New session 21 of user core.
Apr 14 13:36:07.916046 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 14 13:36:08.851891 sshd[6902]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:08.870669 systemd[1]: sshd@20-10.0.0.9:22-10.0.0.1:55352.service: Deactivated successfully.
Apr 14 13:36:08.900189 systemd[1]: session-21.scope: Deactivated successfully.
Apr 14 13:36:08.908598 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Apr 14 13:36:08.915691 systemd-logind[1445]: Removed session 21.
Apr 14 13:36:13.922976 systemd[1]: Started sshd@21-10.0.0.9:22-10.0.0.1:42544.service - OpenSSH per-connection server daemon (10.0.0.1:42544).
Apr 14 13:36:14.076226 sshd[6937]: Accepted publickey for core from 10.0.0.1 port 42544 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:14.092953 sshd[6937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:14.171861 systemd-logind[1445]: New session 22 of user core.
Apr 14 13:36:14.189583 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 14 13:36:14.583283 kubelet[2512]: E0414 13:36:14.583141 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:15.024469 sshd[6937]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:15.033433 systemd[1]: sshd@21-10.0.0.9:22-10.0.0.1:42544.service: Deactivated successfully.
Apr 14 13:36:15.035848 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 13:36:15.041008 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Apr 14 13:36:15.045595 systemd-logind[1445]: Removed session 22.
Apr 14 13:36:20.190086 systemd[1]: Started sshd@22-10.0.0.9:22-10.0.0.1:47020.service - OpenSSH per-connection server daemon (10.0.0.1:47020).
Apr 14 13:36:20.383836 sshd[6957]: Accepted publickey for core from 10.0.0.1 port 47020 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:20.390680 sshd[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:20.460112 systemd-logind[1445]: New session 23 of user core.
Apr 14 13:36:20.476392 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 14 13:36:21.084862 sshd[6957]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:21.134646 systemd[1]: sshd@22-10.0.0.9:22-10.0.0.1:47020.service: Deactivated successfully.
Apr 14 13:36:21.143007 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 13:36:21.143670 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Apr 14 13:36:21.145047 systemd-logind[1445]: Removed session 23.
Apr 14 13:36:24.597733 kubelet[2512]: E0414 13:36:24.597291 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:26.153680 systemd[1]: Started sshd@23-10.0.0.9:22-10.0.0.1:47026.service - OpenSSH per-connection server daemon (10.0.0.1:47026).
Apr 14 13:36:26.373289 sshd[7008]: Accepted publickey for core from 10.0.0.1 port 47026 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:26.457692 sshd[7008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:26.493290 systemd-logind[1445]: New session 24 of user core.
Apr 14 13:36:26.512721 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 13:36:27.658750 sshd[7008]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:27.670564 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Apr 14 13:36:27.671096 systemd[1]: sshd@23-10.0.0.9:22-10.0.0.1:47026.service: Deactivated successfully.
Apr 14 13:36:27.680609 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 13:36:27.682828 systemd-logind[1445]: Removed session 24.
Apr 14 13:36:32.687650 systemd[1]: Started sshd@24-10.0.0.9:22-10.0.0.1:52092.service - OpenSSH per-connection server daemon (10.0.0.1:52092).
Apr 14 13:36:32.861969 sshd[7072]: Accepted publickey for core from 10.0.0.1 port 52092 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:32.871973 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:32.975483 systemd-logind[1445]: New session 25 of user core.
Apr 14 13:36:32.998712 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 13:36:33.574420 sshd[7072]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:33.580191 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Apr 14 13:36:33.580616 systemd[1]: sshd@24-10.0.0.9:22-10.0.0.1:52092.service: Deactivated successfully.
Apr 14 13:36:33.594320 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 13:36:33.595849 systemd-logind[1445]: Removed session 25.
Apr 14 13:36:38.653739 systemd[1]: Started sshd@25-10.0.0.9:22-10.0.0.1:52106.service - OpenSSH per-connection server daemon (10.0.0.1:52106).
Apr 14 13:36:38.878297 sshd[7087]: Accepted publickey for core from 10.0.0.1 port 52106 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:38.878870 sshd[7087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:38.899126 systemd-logind[1445]: New session 26 of user core.
Apr 14 13:36:38.961203 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 14 13:36:39.787788 sshd[7087]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:39.828674 systemd[1]: sshd@25-10.0.0.9:22-10.0.0.1:52106.service: Deactivated successfully.
Apr 14 13:36:39.836637 systemd[1]: session-26.scope: Deactivated successfully.
Apr 14 13:36:39.843502 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Apr 14 13:36:39.846840 systemd-logind[1445]: Removed session 26.
Apr 14 13:36:40.593062 kubelet[2512]: E0414 13:36:40.592292 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:44.914861 systemd[1]: Started sshd@26-10.0.0.9:22-10.0.0.1:35946.service - OpenSSH per-connection server daemon (10.0.0.1:35946).
Apr 14 13:36:45.078281 sshd[7121]: Accepted publickey for core from 10.0.0.1 port 35946 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:45.088006 sshd[7121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:45.138860 systemd-logind[1445]: New session 27 of user core.
Apr 14 13:36:45.158378 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 14 13:36:45.584770 kubelet[2512]: E0414 13:36:45.584708 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:45.586271 kubelet[2512]: E0414 13:36:45.585779 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:36:45.885721 sshd[7121]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:45.911899 systemd[1]: sshd@26-10.0.0.9:22-10.0.0.1:35946.service: Deactivated successfully.
Apr 14 13:36:45.923614 systemd[1]: session-27.scope: Deactivated successfully.
Apr 14 13:36:45.948057 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit.
Apr 14 13:36:45.965388 systemd-logind[1445]: Removed session 27.
Apr 14 13:36:50.997795 systemd[1]: Started sshd@27-10.0.0.9:22-10.0.0.1:52682.service - OpenSSH per-connection server daemon (10.0.0.1:52682).
Apr 14 13:36:51.088204 sshd[7136]: Accepted publickey for core from 10.0.0.1 port 52682 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:51.099730 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:51.177764 systemd-logind[1445]: New session 28 of user core.
Apr 14 13:36:51.190634 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 14 13:36:51.536198 sshd[7136]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:51.567454 systemd[1]: sshd@27-10.0.0.9:22-10.0.0.1:52682.service: Deactivated successfully.
Apr 14 13:36:51.574898 systemd[1]: session-28.scope: Deactivated successfully.
Apr 14 13:36:51.577300 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit.
Apr 14 13:36:51.580847 systemd-logind[1445]: Removed session 28.
Apr 14 13:36:56.593401 systemd[1]: Started sshd@28-10.0.0.9:22-10.0.0.1:52694.service - OpenSSH per-connection server daemon (10.0.0.1:52694).
Apr 14 13:36:56.854719 sshd[7154]: Accepted publickey for core from 10.0.0.1 port 52694 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:36:56.856705 sshd[7154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:36:56.894102 systemd-logind[1445]: New session 29 of user core.
Apr 14 13:36:56.933832 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 14 13:36:57.489684 sshd[7154]: pam_unix(sshd:session): session closed for user core
Apr 14 13:36:57.492800 systemd[1]: sshd@28-10.0.0.9:22-10.0.0.1:52694.service: Deactivated successfully.
Apr 14 13:36:57.503695 systemd[1]: session-29.scope: Deactivated successfully.
Apr 14 13:36:57.506185 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit.
Apr 14 13:36:57.534355 systemd-logind[1445]: Removed session 29.
Apr 14 13:37:02.532285 systemd[1]: Started sshd@29-10.0.0.9:22-10.0.0.1:42552.service - OpenSSH per-connection server daemon (10.0.0.1:42552).
Apr 14 13:37:02.587112 kubelet[2512]: E0414 13:37:02.586979 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:37:02.664432 sshd[7261]: Accepted publickey for core from 10.0.0.1 port 42552 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:02.668025 sshd[7261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:02.696498 systemd-logind[1445]: New session 30 of user core.
Apr 14 13:37:02.723587 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 14 13:37:03.546557 sshd[7261]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:03.549703 systemd[1]: sshd@29-10.0.0.9:22-10.0.0.1:42552.service: Deactivated successfully.
Apr 14 13:37:03.564569 systemd[1]: session-30.scope: Deactivated successfully.
Apr 14 13:37:03.567323 systemd-logind[1445]: Session 30 logged out. Waiting for processes to exit.
Apr 14 13:37:03.586761 systemd-logind[1445]: Removed session 30.
Apr 14 13:37:07.592476 kubelet[2512]: E0414 13:37:07.592398 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:37:08.655644 systemd[1]: Started sshd@30-10.0.0.9:22-10.0.0.1:42554.service - OpenSSH per-connection server daemon (10.0.0.1:42554).
Apr 14 13:37:08.829038 sshd[7276]: Accepted publickey for core from 10.0.0.1 port 42554 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:08.832738 sshd[7276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:08.877394 systemd-logind[1445]: New session 31 of user core.
Apr 14 13:37:08.906842 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 14 13:37:09.747224 sshd[7276]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:09.756833 systemd[1]: sshd@30-10.0.0.9:22-10.0.0.1:42554.service: Deactivated successfully.
Apr 14 13:37:09.775661 systemd[1]: session-31.scope: Deactivated successfully.
Apr 14 13:37:09.789386 systemd-logind[1445]: Session 31 logged out. Waiting for processes to exit.
Apr 14 13:37:09.794636 systemd-logind[1445]: Removed session 31.
Apr 14 13:37:14.824720 systemd[1]: Started sshd@31-10.0.0.9:22-10.0.0.1:36914.service - OpenSSH per-connection server daemon (10.0.0.1:36914).
Apr 14 13:37:15.061258 sshd[7311]: Accepted publickey for core from 10.0.0.1 port 36914 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:15.097613 sshd[7311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:15.144956 systemd-logind[1445]: New session 32 of user core.
Apr 14 13:37:15.155677 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 14 13:37:15.484779 sshd[7311]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:15.491090 systemd[1]: sshd@31-10.0.0.9:22-10.0.0.1:36914.service: Deactivated successfully.
Apr 14 13:37:15.492876 systemd[1]: session-32.scope: Deactivated successfully.
Apr 14 13:37:15.493599 systemd-logind[1445]: Session 32 logged out. Waiting for processes to exit.
Apr 14 13:37:15.494898 systemd-logind[1445]: Removed session 32.
Apr 14 13:37:20.613296 systemd[1]: Started sshd@32-10.0.0.9:22-10.0.0.1:57740.service - OpenSSH per-connection server daemon (10.0.0.1:57740).
Apr 14 13:37:20.720157 sshd[7326]: Accepted publickey for core from 10.0.0.1 port 57740 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:20.728633 sshd[7326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:20.801385 systemd-logind[1445]: New session 33 of user core.
Apr 14 13:37:20.807355 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 14 13:37:21.564031 sshd[7326]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:21.590420 systemd[1]: sshd@32-10.0.0.9:22-10.0.0.1:57740.service: Deactivated successfully.
Apr 14 13:37:21.658618 systemd[1]: session-33.scope: Deactivated successfully.
Apr 14 13:37:21.664799 systemd-logind[1445]: Session 33 logged out. Waiting for processes to exit.
Apr 14 13:37:21.672355 systemd-logind[1445]: Removed session 33.
Apr 14 13:37:26.622429 systemd[1]: Started sshd@33-10.0.0.9:22-10.0.0.1:57752.service - OpenSSH per-connection server daemon (10.0.0.1:57752).
Apr 14 13:37:26.827658 sshd[7343]: Accepted publickey for core from 10.0.0.1 port 57752 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:26.831408 sshd[7343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:26.872396 systemd-logind[1445]: New session 34 of user core.
Apr 14 13:37:26.887785 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 14 13:37:28.084438 systemd[1]: run-containerd-runc-k8s.io-40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33-runc.c2a75E.mount: Deactivated successfully.
Apr 14 13:37:28.257287 sshd[7343]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:28.262800 systemd[1]: sshd@33-10.0.0.9:22-10.0.0.1:57752.service: Deactivated successfully.
Apr 14 13:37:28.272285 systemd[1]: session-34.scope: Deactivated successfully.
Apr 14 13:37:28.277889 systemd-logind[1445]: Session 34 logged out. Waiting for processes to exit.
Apr 14 13:37:28.282757 systemd-logind[1445]: Removed session 34.
Apr 14 13:37:33.327870 systemd[1]: Started sshd@34-10.0.0.9:22-10.0.0.1:44044.service - OpenSSH per-connection server daemon (10.0.0.1:44044).
Apr 14 13:37:33.469361 sshd[7402]: Accepted publickey for core from 10.0.0.1 port 44044 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:33.471146 sshd[7402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:33.485488 systemd-logind[1445]: New session 35 of user core.
Apr 14 13:37:33.510440 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 14 13:37:34.897521 sshd[7402]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:34.925802 systemd-logind[1445]: Session 35 logged out. Waiting for processes to exit.
Apr 14 13:37:34.928477 systemd[1]: sshd@34-10.0.0.9:22-10.0.0.1:44044.service: Deactivated successfully.
Apr 14 13:37:34.949531 systemd[1]: session-35.scope: Deactivated successfully.
Apr 14 13:37:34.971081 systemd-logind[1445]: Removed session 35.
Apr 14 13:37:38.599398 kubelet[2512]: E0414 13:37:38.599235 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:37:39.986477 systemd[1]: Started sshd@35-10.0.0.9:22-10.0.0.1:55420.service - OpenSSH per-connection server daemon (10.0.0.1:55420).
Apr 14 13:37:40.279535 sshd[7431]: Accepted publickey for core from 10.0.0.1 port 55420 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:40.291242 sshd[7431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:40.330306 systemd-logind[1445]: New session 36 of user core.
Apr 14 13:37:40.343187 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 14 13:37:41.344636 sshd[7431]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:41.470726 systemd[1]: sshd@35-10.0.0.9:22-10.0.0.1:55420.service: Deactivated successfully.
Apr 14 13:37:41.472327 systemd-logind[1445]: Session 36 logged out. Waiting for processes to exit.
Apr 14 13:37:41.525172 systemd[1]: session-36.scope: Deactivated successfully.
Apr 14 13:37:41.541383 systemd-logind[1445]: Removed session 36.
Apr 14 13:37:46.481343 systemd[1]: Started sshd@36-10.0.0.9:22-10.0.0.1:55434.service - OpenSSH per-connection server daemon (10.0.0.1:55434).
Apr 14 13:37:46.732333 sshd[7450]: Accepted publickey for core from 10.0.0.1 port 55434 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:46.750734 sshd[7450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:46.777264 systemd-logind[1445]: New session 37 of user core.
Apr 14 13:37:46.792192 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 14 13:37:47.817978 sshd[7450]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:47.830223 systemd-logind[1445]: Session 37 logged out. Waiting for processes to exit.
Apr 14 13:37:47.830468 systemd[1]: sshd@36-10.0.0.9:22-10.0.0.1:55434.service: Deactivated successfully.
Apr 14 13:37:47.845197 systemd[1]: session-37.scope: Deactivated successfully.
Apr 14 13:37:47.861881 systemd-logind[1445]: Removed session 37.
Apr 14 13:37:52.587445 kubelet[2512]: E0414 13:37:52.586240 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:37:52.961170 systemd[1]: Started sshd@37-10.0.0.9:22-10.0.0.1:48662.service - OpenSSH per-connection server daemon (10.0.0.1:48662).
Apr 14 13:37:53.252008 sshd[7466]: Accepted publickey for core from 10.0.0.1 port 48662 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:53.264028 sshd[7466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:53.286735 systemd-logind[1445]: New session 38 of user core.
Apr 14 13:37:53.341802 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 14 13:37:54.272456 sshd[7466]: pam_unix(sshd:session): session closed for user core
Apr 14 13:37:54.353524 systemd[1]: sshd@37-10.0.0.9:22-10.0.0.1:48662.service: Deactivated successfully.
Apr 14 13:37:54.361288 systemd[1]: session-38.scope: Deactivated successfully.
Apr 14 13:37:54.373724 systemd-logind[1445]: Session 38 logged out. Waiting for processes to exit.
Apr 14 13:37:54.384171 systemd-logind[1445]: Removed session 38.
Apr 14 13:37:56.635002 kubelet[2512]: E0414 13:37:56.634708 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:37:59.449836 systemd[1]: Started sshd@38-10.0.0.9:22-10.0.0.1:48672.service - OpenSSH per-connection server daemon (10.0.0.1:48672).
Apr 14 13:37:59.701425 sshd[7539]: Accepted publickey for core from 10.0.0.1 port 48672 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:37:59.702813 sshd[7539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:37:59.794796 systemd-logind[1445]: New session 39 of user core.
Apr 14 13:37:59.833664 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 14 13:37:59.896302 systemd[1]: run-containerd-runc-k8s.io-40902379f694e5d8c92aa5c46c67f85dd3ad71563dbb84af6a08f93151a15e33-runc.sxnYr5.mount: Deactivated successfully.
Apr 14 13:38:00.790165 sshd[7539]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:00.823588 systemd[1]: sshd@38-10.0.0.9:22-10.0.0.1:48672.service: Deactivated successfully.
Apr 14 13:38:00.884682 systemd[1]: session-39.scope: Deactivated successfully.
Apr 14 13:38:00.893801 systemd-logind[1445]: Session 39 logged out. Waiting for processes to exit.
Apr 14 13:38:00.940088 systemd-logind[1445]: Removed session 39.
Apr 14 13:38:01.869002 systemd[1]: run-containerd-runc-k8s.io-9c5214f1c2e833a1902660eeea2da65e7e1e4d66f90fdbce186032f52c764a69-runc.nwrcb6.mount: Deactivated successfully.
Apr 14 13:38:02.584798 kubelet[2512]: E0414 13:38:02.584677 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:38:05.838704 systemd[1]: Started sshd@39-10.0.0.9:22-10.0.0.1:55284.service - OpenSSH per-connection server daemon (10.0.0.1:55284).
Apr 14 13:38:06.045114 sshd[7618]: Accepted publickey for core from 10.0.0.1 port 55284 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:06.066866 sshd[7618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:06.166265 systemd-logind[1445]: New session 40 of user core.
Apr 14 13:38:06.186308 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 14 13:38:06.888734 sshd[7618]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:06.958528 systemd[1]: sshd@39-10.0.0.9:22-10.0.0.1:55284.service: Deactivated successfully.
Apr 14 13:38:06.997013 systemd[1]: session-40.scope: Deactivated successfully.
Apr 14 13:38:07.004454 systemd-logind[1445]: Session 40 logged out. Waiting for processes to exit.
Apr 14 13:38:07.008598 systemd-logind[1445]: Removed session 40.
Apr 14 13:38:11.581332 kubelet[2512]: E0414 13:38:11.581208 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:38:11.916165 systemd[1]: Started sshd@40-10.0.0.9:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122).
Apr 14 13:38:11.957399 sshd[7662]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:11.960557 sshd[7662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:11.979315 systemd-logind[1445]: New session 41 of user core.
Apr 14 13:38:11.985270 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 14 13:38:12.271289 sshd[7662]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:12.277726 systemd[1]: sshd@40-10.0.0.9:22-10.0.0.1:45122.service: Deactivated successfully.
Apr 14 13:38:12.282787 systemd[1]: session-41.scope: Deactivated successfully.
Apr 14 13:38:12.284272 systemd-logind[1445]: Session 41 logged out. Waiting for processes to exit.
Apr 14 13:38:12.285394 systemd-logind[1445]: Removed session 41.
Apr 14 13:38:15.580952 kubelet[2512]: E0414 13:38:15.580752 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:38:17.289972 systemd[1]: Started sshd@41-10.0.0.9:22-10.0.0.1:45128.service - OpenSSH per-connection server daemon (10.0.0.1:45128).
Apr 14 13:38:17.334923 sshd[7690]: Accepted publickey for core from 10.0.0.1 port 45128 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:17.336363 sshd[7690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:17.340068 systemd-logind[1445]: New session 42 of user core.
Apr 14 13:38:17.349067 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 14 13:38:17.501341 sshd[7690]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:17.506037 systemd[1]: sshd@41-10.0.0.9:22-10.0.0.1:45128.service: Deactivated successfully.
Apr 14 13:38:17.509314 systemd[1]: session-42.scope: Deactivated successfully.
Apr 14 13:38:17.510397 systemd-logind[1445]: Session 42 logged out. Waiting for processes to exit.
Apr 14 13:38:17.511544 systemd-logind[1445]: Removed session 42.
Apr 14 13:38:20.580493 kubelet[2512]: E0414 13:38:20.580415 2512 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:38:22.515991 systemd[1]: Started sshd@42-10.0.0.9:22-10.0.0.1:56244.service - OpenSSH per-connection server daemon (10.0.0.1:56244).
Apr 14 13:38:22.554087 sshd[7705]: Accepted publickey for core from 10.0.0.1 port 56244 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:22.555628 sshd[7705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:22.563503 systemd-logind[1445]: New session 43 of user core.
Apr 14 13:38:22.572286 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 14 13:38:22.673816 sshd[7705]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:22.684813 systemd[1]: sshd@42-10.0.0.9:22-10.0.0.1:56244.service: Deactivated successfully.
Apr 14 13:38:22.686484 systemd[1]: session-43.scope: Deactivated successfully.
Apr 14 13:38:22.687799 systemd-logind[1445]: Session 43 logged out. Waiting for processes to exit.
Apr 14 13:38:22.689249 systemd[1]: Started sshd@43-10.0.0.9:22-10.0.0.1:56252.service - OpenSSH per-connection server daemon (10.0.0.1:56252).
Apr 14 13:38:22.690489 systemd-logind[1445]: Removed session 43.
Apr 14 13:38:22.748861 sshd[7720]: Accepted publickey for core from 10.0.0.1 port 56252 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:22.751882 sshd[7720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:22.759978 systemd-logind[1445]: New session 44 of user core.
Apr 14 13:38:22.767159 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 14 13:38:22.958766 sshd[7720]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:22.966488 systemd[1]: sshd@43-10.0.0.9:22-10.0.0.1:56252.service: Deactivated successfully.
Apr 14 13:38:22.967835 systemd[1]: session-44.scope: Deactivated successfully.
Apr 14 13:38:22.971336 systemd-logind[1445]: Session 44 logged out. Waiting for processes to exit.
Apr 14 13:38:22.980545 systemd[1]: Started sshd@44-10.0.0.9:22-10.0.0.1:56256.service - OpenSSH per-connection server daemon (10.0.0.1:56256).
Apr 14 13:38:22.982319 systemd-logind[1445]: Removed session 44.
Apr 14 13:38:23.019009 sshd[7733]: Accepted publickey for core from 10.0.0.1 port 56256 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:23.021599 sshd[7733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:23.026314 systemd-logind[1445]: New session 45 of user core.
Apr 14 13:38:23.037592 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 14 13:38:23.148287 sshd[7733]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:23.150984 systemd[1]: sshd@44-10.0.0.9:22-10.0.0.1:56256.service: Deactivated successfully.
Apr 14 13:38:23.152276 systemd[1]: session-45.scope: Deactivated successfully.
Apr 14 13:38:23.152806 systemd-logind[1445]: Session 45 logged out. Waiting for processes to exit.
Apr 14 13:38:23.153564 systemd-logind[1445]: Removed session 45.
Apr 14 13:38:28.170156 systemd[1]: Started sshd@45-10.0.0.9:22-10.0.0.1:56266.service - OpenSSH per-connection server daemon (10.0.0.1:56266).
Apr 14 13:38:28.200815 sshd[7769]: Accepted publickey for core from 10.0.0.1 port 56266 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:28.202349 sshd[7769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:28.205984 systemd-logind[1445]: New session 46 of user core.
Apr 14 13:38:28.220097 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 14 13:38:28.350683 sshd[7769]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:28.358103 systemd[1]: sshd@45-10.0.0.9:22-10.0.0.1:56266.service: Deactivated successfully.
Apr 14 13:38:28.359415 systemd[1]: session-46.scope: Deactivated successfully.
Apr 14 13:38:28.360449 systemd-logind[1445]: Session 46 logged out. Waiting for processes to exit.
Apr 14 13:38:28.366154 systemd[1]: Started sshd@46-10.0.0.9:22-10.0.0.1:56278.service - OpenSSH per-connection server daemon (10.0.0.1:56278).
Apr 14 13:38:28.366810 systemd-logind[1445]: Removed session 46.
Apr 14 13:38:28.400590 sshd[7783]: Accepted publickey for core from 10.0.0.1 port 56278 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:28.401778 sshd[7783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:28.405419 systemd-logind[1445]: New session 47 of user core.
Apr 14 13:38:28.413058 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 14 13:38:28.717888 sshd[7783]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:28.723057 systemd[1]: sshd@46-10.0.0.9:22-10.0.0.1:56278.service: Deactivated successfully.
Apr 14 13:38:28.724336 systemd[1]: session-47.scope: Deactivated successfully.
Apr 14 13:38:28.724789 systemd-logind[1445]: Session 47 logged out. Waiting for processes to exit.
Apr 14 13:38:28.726157 systemd[1]: Started sshd@47-10.0.0.9:22-10.0.0.1:56288.service - OpenSSH per-connection server daemon (10.0.0.1:56288).
Apr 14 13:38:28.726686 systemd-logind[1445]: Removed session 47.
Apr 14 13:38:28.765243 sshd[7796]: Accepted publickey for core from 10.0.0.1 port 56288 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:28.767312 sshd[7796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:28.771944 systemd-logind[1445]: New session 48 of user core.
Apr 14 13:38:28.776058 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 14 13:38:29.273945 sshd[7796]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:29.283612 systemd[1]: sshd@47-10.0.0.9:22-10.0.0.1:56288.service: Deactivated successfully.
Apr 14 13:38:29.285391 systemd[1]: session-48.scope: Deactivated successfully.
Apr 14 13:38:29.286642 systemd-logind[1445]: Session 48 logged out. Waiting for processes to exit.
Apr 14 13:38:29.294226 systemd[1]: Started sshd@48-10.0.0.9:22-10.0.0.1:56300.service - OpenSSH per-connection server daemon (10.0.0.1:56300).
Apr 14 13:38:29.298106 systemd-logind[1445]: Removed session 48.
Apr 14 13:38:29.351756 sshd[7824]: Accepted publickey for core from 10.0.0.1 port 56300 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:29.354204 sshd[7824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:29.359304 systemd-logind[1445]: New session 49 of user core.
Apr 14 13:38:29.371158 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 14 13:38:29.713325 sshd[7824]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:29.726169 systemd[1]: sshd@48-10.0.0.9:22-10.0.0.1:56300.service: Deactivated successfully.
Apr 14 13:38:29.728018 systemd[1]: session-49.scope: Deactivated successfully.
Apr 14 13:38:29.730115 systemd-logind[1445]: Session 49 logged out. Waiting for processes to exit.
Apr 14 13:38:29.735940 systemd[1]: Started sshd@49-10.0.0.9:22-10.0.0.1:57320.service - OpenSSH per-connection server daemon (10.0.0.1:57320).
Apr 14 13:38:29.739121 systemd-logind[1445]: Removed session 49.
Apr 14 13:38:29.771939 sshd[7836]: Accepted publickey for core from 10.0.0.1 port 57320 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:29.773505 sshd[7836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:29.778450 systemd-logind[1445]: New session 50 of user core.
Apr 14 13:38:29.788228 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 14 13:38:29.953116 sshd[7836]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:29.955974 systemd[1]: sshd@49-10.0.0.9:22-10.0.0.1:57320.service: Deactivated successfully.
Apr 14 13:38:29.957491 systemd[1]: session-50.scope: Deactivated successfully.
Apr 14 13:38:29.958246 systemd-logind[1445]: Session 50 logged out. Waiting for processes to exit.
Apr 14 13:38:29.959323 systemd-logind[1445]: Removed session 50.
Apr 14 13:38:34.965454 systemd[1]: Started sshd@50-10.0.0.9:22-10.0.0.1:57334.service - OpenSSH per-connection server daemon (10.0.0.1:57334).
Apr 14 13:38:35.002955 sshd[7899]: Accepted publickey for core from 10.0.0.1 port 57334 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI
Apr 14 13:38:35.005363 sshd[7899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:38:35.011415 systemd-logind[1445]: New session 51 of user core.
Apr 14 13:38:35.020537 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 14 13:38:35.141874 sshd[7899]: pam_unix(sshd:session): session closed for user core
Apr 14 13:38:35.144824 systemd[1]: sshd@50-10.0.0.9:22-10.0.0.1:57334.service: Deactivated successfully.
Apr 14 13:38:35.146448 systemd[1]: session-51.scope: Deactivated successfully.
Apr 14 13:38:35.147050 systemd-logind[1445]: Session 51 logged out. Waiting for processes to exit. Apr 14 13:38:35.147862 systemd-logind[1445]: Removed session 51. Apr 14 13:38:40.153482 systemd[1]: Started sshd@51-10.0.0.9:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048). Apr 14 13:38:40.188021 sshd[7933]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:40.189182 sshd[7933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:40.192469 systemd-logind[1445]: New session 52 of user core. Apr 14 13:38:40.200087 systemd[1]: Started session-52.scope - Session 52 of User core. Apr 14 13:38:40.297288 sshd[7933]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:40.299978 systemd[1]: sshd@51-10.0.0.9:22-10.0.0.1:40048.service: Deactivated successfully. Apr 14 13:38:40.301354 systemd[1]: session-52.scope: Deactivated successfully. Apr 14 13:38:40.301795 systemd-logind[1445]: Session 52 logged out. Waiting for processes to exit. Apr 14 13:38:40.302478 systemd-logind[1445]: Removed session 52. Apr 14 13:38:45.308246 systemd[1]: Started sshd@52-10.0.0.9:22-10.0.0.1:40052.service - OpenSSH per-connection server daemon (10.0.0.1:40052). Apr 14 13:38:45.342927 sshd[7948]: Accepted publickey for core from 10.0.0.1 port 40052 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:45.344348 sshd[7948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:45.348131 systemd-logind[1445]: New session 53 of user core. Apr 14 13:38:45.353032 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 14 13:38:45.472208 sshd[7948]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:45.475280 systemd[1]: sshd@52-10.0.0.9:22-10.0.0.1:40052.service: Deactivated successfully. 
Apr 14 13:38:45.476948 systemd[1]: session-53.scope: Deactivated successfully. Apr 14 13:38:45.477487 systemd-logind[1445]: Session 53 logged out. Waiting for processes to exit. Apr 14 13:38:45.478312 systemd-logind[1445]: Removed session 53.