Apr 14 13:31:12.882299 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 14 13:31:12.882318 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 13:31:12.882328 kernel: BIOS-provided physical RAM map: Apr 14 13:31:12.882334 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 14 13:31:12.882339 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 14 13:31:12.882344 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 14 13:31:12.882350 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 14 13:31:12.882356 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 14 13:31:12.882361 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 14 13:31:12.882367 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 14 13:31:12.882372 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 14 13:31:12.882377 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 14 13:31:12.882382 kernel: NX (Execute Disable) protection: active Apr 14 13:31:12.882387 kernel: APIC: Static calls initialized Apr 14 13:31:12.882394 kernel: SMBIOS 2.8 present. Apr 14 13:31:12.882402 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 14 13:31:12.882407 kernel: Hypervisor detected: KVM Apr 14 13:31:12.882412 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 14 13:31:12.882418 kernel: kvm-clock: using sched offset of 3596384128 cycles Apr 14 13:31:12.882424 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 14 13:31:12.882430 kernel: tsc: Detected 2793.438 MHz processor Apr 14 13:31:12.882435 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 14 13:31:12.882441 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 14 13:31:12.882447 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 14 13:31:12.882454 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 14 13:31:12.882460 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 14 13:31:12.882466 kernel: Using GB pages for direct mapping Apr 14 13:31:12.882472 kernel: ACPI: Early table checksum verification disabled Apr 14 13:31:12.882477 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 14 13:31:12.882483 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882489 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882494 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882500 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 14 13:31:12.882507 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882513 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882518 kernel: 
ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882524 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 13:31:12.882529 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 14 13:31:12.882535 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 14 13:31:12.882541 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 14 13:31:12.882548 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 14 13:31:12.882554 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 14 13:31:12.882559 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 14 13:31:12.882564 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 14 13:31:12.882569 kernel: No NUMA configuration found Apr 14 13:31:12.882574 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 14 13:31:12.882579 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 14 13:31:12.882585 kernel: Zone ranges: Apr 14 13:31:12.882590 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 14 13:31:12.882595 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 14 13:31:12.882600 kernel: Normal empty Apr 14 13:31:12.882605 kernel: Movable zone start for each node Apr 14 13:31:12.882609 kernel: Early memory node ranges Apr 14 13:31:12.882614 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 14 13:31:12.882619 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 14 13:31:12.882624 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 14 13:31:12.882629 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 13:31:12.882636 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 14 13:31:12.882641 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 14 13:31:12.882645 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 14 13:31:12.882650 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 14 13:31:12.882655 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 14 13:31:12.882660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 14 13:31:12.882665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 14 13:31:12.882670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 14 13:31:12.882675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 14 13:31:12.882681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 14 13:31:12.882686 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 14 13:31:12.882691 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 14 13:31:12.882696 kernel: TSC deadline timer available Apr 14 13:31:12.882701 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 14 13:31:12.882706 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 14 13:31:12.882710 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 14 13:31:12.882715 kernel: kvm-guest: setup PV sched yield Apr 14 13:31:12.882720 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 14 13:31:12.882726 kernel: Booting paravirtualized kernel on KVM Apr 14 13:31:12.882731 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 14 13:31:12.882736 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 
nr_cpu_ids:4 nr_node_ids:1 Apr 14 13:31:12.882741 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 14 13:31:12.882746 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 14 13:31:12.882751 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 14 13:31:12.882756 kernel: kvm-guest: PV spinlocks enabled Apr 14 13:31:12.882761 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 14 13:31:12.882766 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 13:31:12.882773 kernel: random: crng init done Apr 14 13:31:12.882778 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 14 13:31:12.882783 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 14 13:31:12.882788 kernel: Fallback order for Node 0: 0 Apr 14 13:31:12.882793 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 14 13:31:12.882798 kernel: Policy zone: DMA32 Apr 14 13:31:12.882803 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 14 13:31:12.882808 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved) Apr 14 13:31:12.882815 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 14 13:31:12.882819 kernel: ftrace: allocating 37996 entries in 149 pages Apr 14 13:31:12.882824 kernel: ftrace: allocated 149 pages with 4 groups Apr 14 13:31:12.882829 kernel: Dynamic Preempt: voluntary Apr 14 13:31:12.882834 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 14 13:31:12.882857 kernel: rcu: RCU event tracing is enabled. Apr 14 13:31:12.882862 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 14 13:31:12.882867 kernel: Trampoline variant of Tasks RCU enabled. Apr 14 13:31:12.882872 kernel: Rude variant of Tasks RCU enabled. Apr 14 13:31:12.882877 kernel: Tracing variant of Tasks RCU enabled. Apr 14 13:31:12.882884 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 14 13:31:12.882889 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 14 13:31:12.882894 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 14 13:31:12.882899 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 14 13:31:12.882904 kernel: Console: colour VGA+ 80x25 Apr 14 13:31:12.882909 kernel: printk: console [ttyS0] enabled Apr 14 13:31:12.882913 kernel: ACPI: Core revision 20230628 Apr 14 13:31:12.882919 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 14 13:31:12.882924 kernel: APIC: Switch to symmetric I/O mode setup Apr 14 13:31:12.882930 kernel: x2apic enabled Apr 14 13:31:12.882935 kernel: APIC: Switched APIC routing to: physical x2apic Apr 14 13:31:12.882940 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 14 13:31:12.882945 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 14 13:31:12.882950 kernel: kvm-guest: setup PV IPIs Apr 14 13:31:12.882955 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 14 13:31:12.882960 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 13:31:12.882971 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 14 13:31:12.882977 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 14 13:31:12.882982 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 14 13:31:12.882987 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 14 13:31:12.882995 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 14 13:31:12.883000 kernel: Spectre V2 : Mitigation: Retpolines Apr 14 13:31:12.883005 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 14 13:31:12.883011 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 14 13:31:12.883017 kernel: RETBleed: Vulnerable Apr 14 13:31:12.883023 kernel: Speculative Store Bypass: Vulnerable Apr 14 13:31:12.883029 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 14 13:31:12.883034 kernel: GDS: Unknown: Dependent on hypervisor status Apr 14 13:31:12.883040 kernel: active return thunk: its_return_thunk Apr 14 13:31:12.883045 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 14 13:31:12.883051 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 14 13:31:12.883056 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 14 13:31:12.883061 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 14 13:31:12.883067 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 14 13:31:12.883074 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 14 13:31:12.883079 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 14 13:31:12.883085 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 14 13:31:12.883090 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 14 13:31:12.883095 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 14 13:31:12.883101 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 14 13:31:12.883106 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 14 13:31:12.883112 kernel: Freeing SMP alternatives memory: 32K Apr 14 13:31:12.883117 kernel: pid_max: default: 32768 minimum: 301 Apr 14 13:31:12.883124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 14 13:31:12.883130 kernel: landlock: Up and running. 
Apr 14 13:31:12.883184 kernel: SELinux: Initializing. Apr 14 13:31:12.883189 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 13:31:12.883195 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 13:31:12.883200 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 14 13:31:12.883206 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 13:31:12.883211 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 13:31:12.883219 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 13:31:12.883224 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 14 13:31:12.883230 kernel: signal: max sigframe size: 3632 Apr 14 13:31:12.883235 kernel: rcu: Hierarchical SRCU implementation. Apr 14 13:31:12.883241 kernel: rcu: Max phase no-delay instances is 400. Apr 14 13:31:12.883246 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 14 13:31:12.883252 kernel: smp: Bringing up secondary CPUs ... Apr 14 13:31:12.883257 kernel: smpboot: x86: Booting SMP configuration: Apr 14 13:31:12.883262 kernel: .... node #0, CPUs: #1 #2 #3 Apr 14 13:31:12.883268 kernel: smp: Brought up 1 node, 4 CPUs Apr 14 13:31:12.883275 kernel: smpboot: Max logical packages: 1 Apr 14 13:31:12.883280 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 14 13:31:12.883286 kernel: devtmpfs: initialized Apr 14 13:31:12.883291 kernel: x86/mm: Memory block size: 128MB Apr 14 13:31:12.883297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 14 13:31:12.883302 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 14 13:31:12.883308 kernel: pinctrl core: initialized pinctrl subsystem Apr 14 13:31:12.883314 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 14 13:31:12.883319 kernel: audit: initializing netlink subsys (disabled) Apr 14 13:31:12.883326 kernel: audit: type=2000 audit(1776173472.119:1): state=initialized audit_enabled=0 res=1 Apr 14 13:31:12.883331 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 14 13:31:12.883337 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 14 13:31:12.883342 kernel: cpuidle: using governor menu Apr 14 13:31:12.883348 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 14 13:31:12.883353 kernel: dca service started, version 1.12.1 Apr 14 13:31:12.883358 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 14 13:31:12.883364 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 14 13:31:12.883369 kernel: PCI: Using configuration type 1 for base access Apr 14 13:31:12.883377 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 14 13:31:12.883382 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 14 13:31:12.883387 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 14 13:31:12.883393 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 14 13:31:12.883398 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 14 13:31:12.883404 kernel: ACPI: Added _OSI(Module Device) Apr 14 13:31:12.883409 kernel: ACPI: Added _OSI(Processor Device) Apr 14 13:31:12.883415 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 14 13:31:12.883422 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 14 13:31:12.883427 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 14 13:31:12.883433 kernel: ACPI: Interpreter enabled Apr 14 13:31:12.883438 kernel: ACPI: PM: (supports S0 S3 S5) Apr 14 13:31:12.883443 kernel: ACPI: Using IOAPIC for interrupt routing Apr 14 13:31:12.883449 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 14 13:31:12.883454 kernel: PCI: Using E820 reservations for host bridge windows Apr 14 13:31:12.883460 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 14 13:31:12.883465 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 14 13:31:12.883569 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 14 13:31:12.883633 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 14 13:31:12.883688 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 14 13:31:12.883695 kernel: PCI host bridge to bus 0000:00 Apr 14 13:31:12.883752 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 14 13:31:12.883804 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 14 13:31:12.883875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 14 13:31:12.883929 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 14 13:31:12.883978 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 14 13:31:12.884027 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 14 13:31:12.884076 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 14 13:31:12.884170 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 14 13:31:12.884235 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 14 13:31:12.884295 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 14 13:31:12.884351 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 14 13:31:12.884411 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 14 13:31:12.884466 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 14 13:31:12.884527 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 14 13:31:12.884584 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 14 13:31:12.884640 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 14 13:31:12.884698 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 14 13:31:12.884759 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 14 13:31:12.884814 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 14 13:31:12.884888 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 14 13:31:12.884946 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Apr 14 13:31:12.885011 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 14 13:31:12.885066 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 14 13:31:12.885122 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 14 13:31:12.885206 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 14 13:31:12.885261 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 14 13:31:12.885321 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 14 13:31:12.885377 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 14 13:31:12.885438 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 14 13:31:12.885493 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 14 13:31:12.885551 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 14 13:31:12.885609 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 14 13:31:12.885665 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 14 13:31:12.885672 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 14 13:31:12.885678 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 14 13:31:12.885684 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 14 13:31:12.885689 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 14 13:31:12.885697 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 14 13:31:12.885702 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 14 13:31:12.885707 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 14 13:31:12.885713 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 14 13:31:12.885718 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 14 13:31:12.885724 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 14 13:31:12.885729 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 14 13:31:12.885735 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 14 13:31:12.885740 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 14 13:31:12.885747 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 14 13:31:12.885752 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 14 13:31:12.885758 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 14 13:31:12.885763 kernel: iommu: Default domain type: Translated Apr 14 13:31:12.885769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 14 13:31:12.885774 kernel: PCI: Using ACPI for IRQ routing Apr 14 13:31:12.885780 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 14 13:31:12.885785 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 14 13:31:12.885791 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 14 13:31:12.885865 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 14 13:31:12.885922 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 14 13:31:12.885977 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 14 13:31:12.885984 kernel: vgaarb: loaded Apr 14 13:31:12.885990 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 14 13:31:12.885996 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 14 13:31:12.886001 kernel: clocksource: Switched to clocksource kvm-clock Apr 14 13:31:12.886007 kernel: VFS: Disk quotas dquot_6.6.0 Apr 14 13:31:12.886012 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Apr 14 13:31:12.886020 kernel: pnp: PnP ACPI init Apr 14 13:31:12.886082 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 14 13:31:12.886093 kernel: pnp: PnP ACPI: found 6 devices Apr 14 13:31:12.886099 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 14 13:31:12.886105 kernel: NET: Registered PF_INET protocol family Apr 14 13:31:12.886111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 14 13:31:12.886116 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 14 13:31:12.886122 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 14 13:31:12.886129 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 14 13:31:12.886384 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 14 13:31:12.886392 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 14 13:31:12.886398 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 13:31:12.886404 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 13:31:12.886409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 14 13:31:12.886415 kernel: NET: Registered PF_XDP protocol family Apr 14 13:31:12.886546 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 14 13:31:12.886616 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 14 13:31:12.886669 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 14 13:31:12.886718 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 14 13:31:12.886767 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 14 13:31:12.886815 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 14 13:31:12.886822 kernel: PCI: CLS 0 bytes, default 64 Apr 14 13:31:12.886828 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 14 13:31:12.886834 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 13:31:12.886873 kernel: Initialise system trusted keyrings Apr 14 13:31:12.886883 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 14 13:31:12.886888 kernel: Key type asymmetric registered Apr 14 13:31:12.886894 kernel: Asymmetric key parser 'x509' registered Apr 14 13:31:12.886899 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 14 13:31:12.886905 kernel: io scheduler mq-deadline registered Apr 14 13:31:12.886910 kernel: io scheduler kyber registered Apr 14 13:31:12.886916 kernel: io scheduler bfq registered Apr 14 13:31:12.886922 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 14 13:31:12.886928 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 14 13:31:12.886935 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 14 13:31:12.886940 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 14 13:31:12.886946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 14 13:31:12.886951 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 14 13:31:12.886957 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 14 13:31:12.886963 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 14 13:31:12.886968 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 14 13:31:12.887036 
kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 14 13:31:12.887046 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 14 13:31:12.887096 kernel: rtc_cmos 00:04: registered as rtc0 Apr 14 13:31:12.887174 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T13:31:12 UTC (1776173472) Apr 14 13:31:12.887226 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 14 13:31:12.887233 kernel: intel_pstate: CPU model not supported Apr 14 13:31:12.887239 kernel: NET: Registered PF_INET6 protocol family Apr 14 13:31:12.887245 kernel: Segment Routing with IPv6 Apr 14 13:31:12.887250 kernel: In-situ OAM (IOAM) with IPv6 Apr 14 13:31:12.887256 kernel: NET: Registered PF_PACKET protocol family Apr 14 13:31:12.887264 kernel: Key type dns_resolver registered Apr 14 13:31:12.887269 kernel: IPI shorthand broadcast: enabled Apr 14 13:31:12.887275 kernel: sched_clock: Marking stable (784007045, 201438337)->(1037721532, -52276150) Apr 14 13:31:12.887280 kernel: registered taskstats version 1 Apr 14 13:31:12.887286 kernel: Loading compiled-in X.509 certificates Apr 14 13:31:12.887291 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 14 13:31:12.887297 kernel: Key type .fscrypt registered Apr 14 13:31:12.887302 kernel: Key type fscrypt-provisioning registered Apr 14 13:31:12.887308 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 14 13:31:12.887317 kernel: ima: Allocated hash algorithm: sha1 Apr 14 13:31:12.887322 kernel: ima: No architecture policies found Apr 14 13:31:12.887328 kernel: clk: Disabling unused clocks Apr 14 13:31:12.887333 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 14 13:31:12.887339 kernel: Write protecting the kernel read-only data: 36864k Apr 14 13:31:12.887344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 14 13:31:12.887350 kernel: Run /init as init process Apr 14 13:31:12.887355 kernel: with arguments: Apr 14 13:31:12.887360 kernel: /init Apr 14 13:31:12.887367 kernel: with environment: Apr 14 13:31:12.887373 kernel: HOME=/ Apr 14 13:31:12.887378 kernel: TERM=linux Apr 14 13:31:12.887386 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 13:31:12.887394 systemd[1]: Detected virtualization kvm. Apr 14 13:31:12.887400 systemd[1]: Detected architecture x86-64. Apr 14 13:31:12.887406 systemd[1]: Running in initrd. Apr 14 13:31:12.887411 systemd[1]: No hostname configured, using default hostname. Apr 14 13:31:12.887419 systemd[1]: Hostname set to . Apr 14 13:31:12.887425 systemd[1]: Initializing machine ID from VM UUID. Apr 14 13:31:12.887431 systemd[1]: Queued start job for default target initrd.target. Apr 14 13:31:12.887436 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 13:31:12.887442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 13:31:12.887448 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 14 13:31:12.887455 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 14 13:31:12.887460 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 14 13:31:12.887469 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 14 13:31:12.887484 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 14 13:31:12.887491 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 14 13:31:12.887497 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 13:31:12.887505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 13:31:12.887511 systemd[1]: Reached target paths.target - Path Units. Apr 14 13:31:12.887518 systemd[1]: Reached target slices.target - Slice Units. Apr 14 13:31:12.887524 systemd[1]: Reached target swap.target - Swaps. Apr 14 13:31:12.887530 systemd[1]: Reached target timers.target - Timer Units. Apr 14 13:31:12.887536 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 13:31:12.887542 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 13:31:12.887548 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 14 13:31:12.887554 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 14 13:31:12.887561 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 13:31:12.887567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 13:31:12.887574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 13:31:12.887581 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 13:31:12.887587 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 14 13:31:12.887594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 13:31:12.887600 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 14 13:31:12.887606 systemd[1]: Starting systemd-fsck-usr.service... Apr 14 13:31:12.887612 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 13:31:12.887619 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 13:31:12.887638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:31:12.887644 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 14 13:31:12.887650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 13:31:12.887657 systemd[1]: Finished systemd-fsck-usr.service. Apr 14 13:31:12.887705 systemd-journald[193]: Collecting audit messages is disabled. Apr 14 13:31:12.887741 systemd-journald[193]: Journal started Apr 14 13:31:12.887772 systemd-journald[193]: Runtime Journal (/run/log/journal/f908f8157ae648bdb08e259a137e26bb) is 6.0M, max 48.4M, 42.3M free. Apr 14 13:31:12.891448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 13:31:12.889711 systemd-modules-load[194]: Inserted module 'overlay' Apr 14 13:31:12.983568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 14 13:31:12.983591 kernel: Bridge firewalling registered Apr 14 13:31:12.913727 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 14 13:31:12.987874 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 13:31:12.990421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 13:31:12.992208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:31:12.995503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 13:31:13.010361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 13:31:13.011184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 13:31:13.012455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 13:31:13.020112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 13:31:13.025630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 13:31:13.029685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 13:31:13.033336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 13:31:13.042312 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 13:31:13.043937 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:31:13.048202 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 14 13:31:13.060563 dracut-cmdline[232]: dracut-dracut-053 Apr 14 13:31:13.064039 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 13:31:13.068331 systemd-resolved[230]: Positive Trust Anchors: Apr 14 13:31:13.068338 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 13:31:13.068363 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 13:31:13.070617 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 14 13:31:13.071376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 13:31:13.073453 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 13:31:13.131378 kernel: SCSI subsystem initialized Apr 14 13:31:13.140298 kernel: Loading iSCSI transport class v2.0-870. 
Apr 14 13:31:13.151174 kernel: iscsi: registered transport (tcp) Apr 14 13:31:13.170316 kernel: iscsi: registered transport (qla4xxx) Apr 14 13:31:13.170432 kernel: QLogic iSCSI HBA Driver Apr 14 13:31:13.204764 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 14 13:31:13.217600 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 14 13:31:13.239425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 14 13:31:13.239459 kernel: device-mapper: uevent: version 1.0.3 Apr 14 13:31:13.239483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 14 13:31:13.276326 kernel: raid6: avx512x4 gen() 44649 MB/s Apr 14 13:31:13.293285 kernel: raid6: avx512x2 gen() 43394 MB/s Apr 14 13:31:13.310299 kernel: raid6: avx512x1 gen() 43681 MB/s Apr 14 13:31:13.327181 kernel: raid6: avx2x4 gen() 37942 MB/s Apr 14 13:31:13.344280 kernel: raid6: avx2x2 gen() 37754 MB/s Apr 14 13:31:13.361943 kernel: raid6: avx2x1 gen() 29661 MB/s Apr 14 13:31:13.362021 kernel: raid6: using algorithm avx512x4 gen() 44649 MB/s Apr 14 13:31:13.379977 kernel: raid6: .... xor() 9342 MB/s, rmw enabled Apr 14 13:31:13.380099 kernel: raid6: using avx512x2 recovery algorithm Apr 14 13:31:13.398313 kernel: xor: automatically using best checksumming function avx Apr 14 13:31:13.538438 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 14 13:31:13.551219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 14 13:31:13.568802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 13:31:13.579364 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 14 13:31:13.582418 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 13:31:13.585786 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 14 13:31:13.603115 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Apr 14 13:31:13.632544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 13:31:13.646606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 13:31:13.677443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 13:31:13.688896 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 14 13:31:13.700092 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 14 13:31:13.703810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 13:31:13.713512 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 14 13:31:13.705586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 13:31:13.707356 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 13:31:13.721172 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 14 13:31:13.721326 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 14 13:31:13.725667 kernel: cryptd: max_cpu_qlen set to 1000 Apr 14 13:31:13.728571 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 14 13:31:13.728651 kernel: GPT:9289727 != 19775487 Apr 14 13:31:13.728659 kernel: GPT:Alternate GPT header not at the end of the disk. 
Apr 14 13:31:13.729535 kernel: GPT:9289727 != 19775487 Apr 14 13:31:13.731718 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 14 13:31:13.731750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:31:13.731697 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 14 13:31:13.741618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 13:31:13.743317 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:31:13.754537 kernel: AVX2 version of gcm_enc/dec engaged. Apr 14 13:31:13.754564 kernel: AES CTR mode by8 optimization enabled Apr 14 13:31:13.745601 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 13:31:13.750094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 13:31:13.750290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:31:13.752325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:31:13.774252 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (467) Apr 14 13:31:13.774333 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Apr 14 13:31:13.769498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:31:13.781165 kernel: libata version 3.00 loaded. Apr 14 13:31:13.787253 kernel: ahci 0000:00:1f.2: version 3.0 Apr 14 13:31:13.787552 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 14 13:31:13.788866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 14 13:31:13.881837 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 14 13:31:13.882118 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 14 13:31:13.882388 kernel: scsi host0: ahci Apr 14 13:31:13.882487 kernel: scsi host1: ahci Apr 14 13:31:13.882562 kernel: scsi host2: ahci Apr 14 13:31:13.882633 kernel: scsi host3: ahci Apr 14 13:31:13.882699 kernel: scsi host4: ahci Apr 14 13:31:13.882763 kernel: scsi host5: ahci Apr 14 13:31:13.882827 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 14 13:31:13.882838 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 14 13:31:13.882869 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 14 13:31:13.882876 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 14 13:31:13.882883 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 14 13:31:13.882890 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 14 13:31:13.884261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:31:13.893387 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 14 13:31:13.900788 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 14 13:31:13.900906 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 14 13:31:13.915085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 13:31:13.930658 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Apr 14 13:31:13.933257 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 13:31:13.939383 disk-uuid[561]: Primary Header is updated. Apr 14 13:31:13.939383 disk-uuid[561]: Secondary Entries is updated. Apr 14 13:31:13.939383 disk-uuid[561]: Secondary Header is updated. Apr 14 13:31:13.942174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:31:13.958039 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:31:14.110428 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 14 13:31:14.110566 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 14 13:31:14.112289 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 14 13:31:14.115303 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 14 13:31:14.115389 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 14 13:31:14.117263 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 14 13:31:14.118178 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 14 13:31:14.120155 kernel: ata3.00: applying bridge limits Apr 14 13:31:14.122180 kernel: ata3.00: configured for UDMA/100 Apr 14 13:31:14.122197 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 14 13:31:14.165770 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 14 13:31:14.166219 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 14 13:31:14.181184 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 14 13:31:14.951181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:31:14.951743 disk-uuid[563]: The operation has completed successfully. Apr 14 13:31:14.972986 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 14 13:31:14.973091 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 14 13:31:14.994666 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 14 13:31:15.000582 sh[597]: Success Apr 14 13:31:15.014204 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 14 13:31:15.043419 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 14 13:31:15.053524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 14 13:31:15.055445 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 14 13:31:15.068169 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 14 13:31:15.068225 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:31:15.070407 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 14 13:31:15.070470 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 14 13:31:15.072522 kernel: BTRFS info (device dm-0): using free space tree Apr 14 13:31:15.078936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 14 13:31:15.080679 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 14 13:31:15.093357 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 14 13:31:15.095426 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 14 13:31:15.105045 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:31:15.105205 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:31:15.105214 kernel: BTRFS info (device vda6): using free space tree Apr 14 13:31:15.109642 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 13:31:15.116112 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 14 13:31:15.120324 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:31:15.126321 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 14 13:31:15.132547 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 14 13:31:15.186679 ignition[692]: Ignition 2.19.0 Apr 14 13:31:15.186693 ignition[692]: Stage: fetch-offline Apr 14 13:31:15.186724 ignition[692]: no configs at "/usr/lib/ignition/base.d" Apr 14 13:31:15.187247 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:31:15.187399 ignition[692]: parsed url from cmdline: "" Apr 14 13:31:15.187401 ignition[692]: no config URL provided Apr 14 13:31:15.187406 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Apr 14 13:31:15.187414 ignition[692]: no config at "/usr/lib/ignition/user.ign" Apr 14 13:31:15.187435 ignition[692]: op(1): [started] loading QEMU firmware config module Apr 14 13:31:15.200934 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 13:31:15.187438 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 14 13:31:15.197311 ignition[692]: op(1): [finished] loading QEMU firmware config module Apr 14 13:31:15.215690 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 13:31:15.234592 systemd-networkd[785]: lo: Link UP Apr 14 13:31:15.234617 systemd-networkd[785]: lo: Gained carrier Apr 14 13:31:15.237783 systemd-networkd[785]: Enumeration completed Apr 14 13:31:15.239340 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:31:15.239352 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 13:31:15.245386 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 13:31:15.249088 systemd[1]: Reached target network.target - Network. Apr 14 13:31:15.252375 systemd-networkd[785]: eth0: Link UP Apr 14 13:31:15.252497 systemd-networkd[785]: eth0: Gained carrier Apr 14 13:31:15.252511 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:31:15.279397 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 13:31:15.321791 ignition[692]: parsing config with SHA512: a5836a3d96e166f2402889351fd5d48c248c60d02c08c9c17e2b81a79850c5c3ba0f73dc2760e97cc841797d83cded521fb3ad837f5f6d14c12e90542dc636d6 Apr 14 13:31:15.324874 unknown[692]: fetched base config from "system" Apr 14 13:31:15.325081 unknown[692]: fetched user config from "qemu" Apr 14 13:31:15.325389 ignition[692]: fetch-offline: fetch-offline passed Apr 14 13:31:15.325434 ignition[692]: Ignition finished successfully Apr 14 13:31:15.330734 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 14 13:31:15.332757 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 14 13:31:15.345386 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 14 13:31:15.358000 ignition[789]: Ignition 2.19.0 Apr 14 13:31:15.358025 ignition[789]: Stage: kargs Apr 14 13:31:15.358245 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 14 13:31:15.360783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 14 13:31:15.358257 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:31:15.359297 ignition[789]: kargs: kargs passed Apr 14 13:31:15.359348 ignition[789]: Ignition finished successfully Apr 14 13:31:15.376470 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 14 13:31:15.385779 ignition[798]: Ignition 2.19.0 Apr 14 13:31:15.385785 ignition[798]: Stage: disks Apr 14 13:31:15.386010 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 14 13:31:15.388427 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 14 13:31:15.386022 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:31:15.390973 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 14 13:31:15.386900 ignition[798]: disks: disks passed Apr 14 13:31:15.392646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 14 13:31:15.386934 ignition[798]: Ignition finished successfully Apr 14 13:31:15.392725 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 13:31:15.393410 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 13:31:15.393626 systemd[1]: Reached target basic.target - Basic System. Apr 14 13:31:15.410172 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 14 13:31:15.421304 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.10 Apr 14 13:31:15.421386 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Apr 14 13:31:15.422102 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 14 13:31:15.429460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 14 13:31:15.443352 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 14 13:31:15.523075 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 14 13:31:15.525825 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 14 13:31:15.523640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 14 13:31:15.534315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 13:31:15.537868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 14 13:31:15.538095 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 14 13:31:15.538124 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 14 13:31:15.555723 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Apr 14 13:31:15.555750 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:31:15.555761 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:31:15.555769 kernel: BTRFS info (device vda6): using free space tree Apr 14 13:31:15.555779 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 13:31:15.538165 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 13:31:15.559719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 14 13:31:15.563911 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 14 13:31:15.569352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 14 13:31:15.604332 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Apr 14 13:31:15.608260 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Apr 14 13:31:15.611925 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Apr 14 13:31:15.616181 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Apr 14 13:31:15.684621 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 14 13:31:15.691317 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 14 13:31:15.695500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 14 13:31:15.699934 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:31:15.714661 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 14 13:31:15.722524 ignition[930]: INFO : Ignition 2.19.0 Apr 14 13:31:15.722524 ignition[930]: INFO : Stage: mount Apr 14 13:31:15.725408 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 13:31:15.725408 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:31:15.725408 ignition[930]: INFO : mount: mount passed Apr 14 13:31:15.725408 ignition[930]: INFO : Ignition finished successfully Apr 14 13:31:15.724816 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 14 13:31:15.739352 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 14 13:31:16.067636 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 14 13:31:16.084698 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 13:31:16.096374 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Apr 14 13:31:16.099534 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:31:16.099564 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:31:16.099573 kernel: BTRFS info (device vda6): using free space tree Apr 14 13:31:16.104180 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 13:31:16.106357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 14 13:31:16.130173 ignition[961]: INFO : Ignition 2.19.0 Apr 14 13:31:16.130173 ignition[961]: INFO : Stage: files Apr 14 13:31:16.130173 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 13:31:16.130173 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:31:16.138955 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Apr 14 13:31:16.138955 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 14 13:31:16.138955 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 14 13:31:16.138955 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 14 13:31:16.138955 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 14 13:31:16.138955 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 14 13:31:16.138955 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 13:31:16.138955 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 14 13:31:16.134033 unknown[961]: wrote ssh authorized keys file for user: core Apr 14 13:31:16.264339 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 14 13:31:16.470594 systemd-networkd[785]: eth0: Gained IPv6LL Apr 14 13:31:16.496480 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 13:31:16.496480 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 14 13:31:16.496480 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 13:31:16.504718 ignition[961]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 13:31:16.504718 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 14 13:31:16.876384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 14 13:31:17.147985 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 13:31:17.147985 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 14 13:31:17.152748 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 13:31:17.155562 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 13:31:17.155562 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 14 13:31:17.155562 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 14 13:31:17.161976 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 13:31:17.161976 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 13:31:17.161976 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 14 13:31:17.161976 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 14 13:31:17.176824 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 13:31:17.180369 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 13:31:17.182540 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 14 13:31:17.182540 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 14 13:31:17.182540 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 14 13:31:17.182540 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 14 13:31:17.182540 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 14 13:31:17.182540 ignition[961]: INFO : files: files passed Apr 14 13:31:17.182540 ignition[961]: INFO : Ignition finished successfully Apr 14 13:31:17.189236 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 14 13:31:17.203342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 14 13:31:17.207843 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
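The files stage recorded above (ops 1 through 12: the core user and its SSH key, the helm tarball, the home-directory manifests, /etc/flatcar/update.conf, the kubernetes sysext image with its /etc/extensions link, and the prepare-helm/coreos-metadata unit presets) is driven by the Ignition config evidently already staged at /run/ignition.json, which is why ignition-fetch.service was skipped earlier in this log. The sketch below is only an illustrative reconstruction, not the actual config from this boot: it is a small Python program that emits the kind of Ignition spec 3.x JSON assumed to produce these operations. Every URL, path, and unit name is copied from the log; the SSH key, file contents, and unit bodies are placeholders because the log does not show them.

import json

# Hypothetical reconstruction of an Ignition (spec 3.x) config matching the
# files-stage operations in this log. Placeholders mark everything the log
# does not reveal; paths, URLs, and unit names are taken from the log itself.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            # op(1)/op(2): create or modify "core" and add SSH keys (key not shown in the log)
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            # op(3): fetched from get.helm.sh
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            # op(4)-op(8): inline contents not visible in the log, left empty here
            {"path": "/home/core/install.sh", "contents": {"source": "data:,"}},
            {"path": "/home/core/nginx.yaml", "contents": {"source": "data:,"}},
            {"path": "/home/core/nfs-pod.yaml", "contents": {"source": "data:,"}},
            {"path": "/home/core/nfs-pvc.yaml", "contents": {"source": "data:,"}},
            {"path": "/etc/flatcar/update.conf", "contents": {"source": "data:,"}},
            # op(a): kubernetes sysext image fetched from extensions.flatcar.org
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw"}},
        ],
        "links": [
            # op(9): activate the sysext by linking it under /etc/extensions
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # op(b)-op(11): write units and set presets as logged
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# placeholder unit body\n"},
            {"name": "coreos-metadata.service", "enabled": False,
             "contents": "[Unit]\n# placeholder unit body\n"},
        ]
    },
}

print(json.dumps(config, indent=2))

In practice such a config would usually be written in Butane YAML and transpiled to this JSON form with the butane tool rather than authored by hand; the Python form here is only to make the assumed structure explicit and easy to inspect.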
Apr 14 13:31:17.208125 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 14 13:31:17.208227 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 14 13:31:17.220051 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Apr 14 13:31:17.223198 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 13:31:17.223198 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 14 13:31:17.228077 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 13:31:17.229411 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 13:31:17.230454 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 14 13:31:17.246384 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 14 13:31:17.267677 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 14 13:31:17.267789 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 14 13:31:17.271124 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 14 13:31:17.272897 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 14 13:31:17.279407 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 14 13:31:17.287311 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 14 13:31:17.298777 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 13:31:17.302642 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 14 13:31:17.314233 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 14 13:31:17.316438 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 13:31:17.319723 systemd[1]: Stopped target timers.target - Timer Units. Apr 14 13:31:17.322769 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 14 13:31:17.322936 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 13:31:17.327949 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 14 13:31:17.332003 systemd[1]: Stopped target basic.target - Basic System. Apr 14 13:31:17.333106 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 14 13:31:17.337098 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 13:31:17.340606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 14 13:31:17.343685 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 14 13:31:17.346716 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 13:31:17.349966 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 14 13:31:17.353559 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 14 13:31:17.356455 systemd[1]: Stopped target swap.target - Swaps. Apr 14 13:31:17.357780 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 14 13:31:17.357914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 14 13:31:17.366736 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 14 13:31:17.368564 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 13:31:17.371730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 14 13:31:17.373769 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 13:31:17.375975 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 14 13:31:17.376162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 14 13:31:17.382255 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 14 13:31:17.382361 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 13:31:17.385375 systemd[1]: Stopped target paths.target - Path Units. Apr 14 13:31:17.388074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 14 13:31:17.393230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 13:31:17.393445 systemd[1]: Stopped target slices.target - Slice Units. Apr 14 13:31:17.399119 systemd[1]: Stopped target sockets.target - Socket Units. Apr 14 13:31:17.401828 systemd[1]: iscsid.socket: Deactivated successfully. Apr 14 13:31:17.401915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 13:31:17.405120 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 14 13:31:17.405236 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 13:31:17.407498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 14 13:31:17.407592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 13:31:17.411935 systemd[1]: ignition-files.service: Deactivated successfully. Apr 14 13:31:17.412023 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 14 13:31:17.431422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 14 13:31:17.434981 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 14 13:31:17.436445 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 14 13:31:17.436608 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 13:31:17.440327 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 14 13:31:17.440446 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 13:31:17.450116 ignition[1016]: INFO : Ignition 2.19.0 Apr 14 13:31:17.450116 ignition[1016]: INFO : Stage: umount Apr 14 13:31:17.450116 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 13:31:17.450116 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:31:17.450116 ignition[1016]: INFO : umount: umount passed Apr 14 13:31:17.450116 ignition[1016]: INFO : Ignition finished successfully Apr 14 13:31:17.445679 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 14 13:31:17.445761 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 14 13:31:17.453661 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 14 13:31:17.453749 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 14 13:31:17.455347 systemd[1]: Stopped target network.target - Network. Apr 14 13:31:17.455579 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 14 13:31:17.455611 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Apr 14 13:31:17.455829 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 14 13:31:17.455851 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 14 13:31:17.456062 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 14 13:31:17.456084 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 14 13:31:17.456530 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 14 13:31:17.456557 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 14 13:31:17.456845 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 14 13:31:17.457015 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 14 13:31:17.458413 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 14 13:31:17.473446 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 14 13:31:17.473622 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 14 13:31:17.475523 systemd-networkd[785]: eth0: DHCPv6 lease lost Apr 14 13:31:17.476093 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 14 13:31:17.476468 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 13:31:17.480552 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 14 13:31:17.480695 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 14 13:31:17.483971 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 14 13:31:17.484014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 14 13:31:17.494532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 14 13:31:17.498198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 14 13:31:17.498280 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 13:31:17.500131 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 13:31:17.500220 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 13:31:17.503556 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 14 13:31:17.503610 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 14 13:31:17.511637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 13:31:17.521801 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 14 13:31:17.521951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 13:31:17.526068 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 14 13:31:17.526208 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 14 13:31:17.528773 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 14 13:31:17.528855 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 14 13:31:17.532010 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 14 13:31:17.532058 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 14 13:31:17.533009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 14 13:31:17.533038 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 13:31:17.536981 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 14 13:31:17.537020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 14 13:31:17.541113 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 14 13:31:17.541171 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 14 13:31:17.546037 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 13:31:17.546081 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:31:17.550791 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 14 13:31:17.550836 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 14 13:31:17.573671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 14 13:31:17.576791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 14 13:31:17.576855 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 13:31:17.583673 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 14 13:31:17.583722 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 13:31:17.588824 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 14 13:31:17.588902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 13:31:17.591755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 13:31:17.591789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:31:17.595213 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 14 13:31:17.595285 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 14 13:31:17.599235 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 14 13:31:17.613294 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 14 13:31:17.618496 systemd[1]: Switching root. Apr 14 13:31:17.649404 systemd-journald[193]: Journal stopped Apr 14 13:31:18.437041 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Apr 14 13:31:18.437100 kernel: SELinux: policy capability network_peer_controls=1 Apr 14 13:31:18.437112 kernel: SELinux: policy capability open_perms=1 Apr 14 13:31:18.437120 kernel: SELinux: policy capability extended_socket_class=1 Apr 14 13:31:18.437128 kernel: SELinux: policy capability always_check_network=0 Apr 14 13:31:18.437386 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 14 13:31:18.437397 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 14 13:31:18.437405 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 14 13:31:18.437412 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 14 13:31:18.437421 kernel: audit: type=1403 audit(1776173477.800:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 14 13:31:18.437434 systemd[1]: Successfully loaded SELinux policy in 38.229ms. Apr 14 13:31:18.439259 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.073ms. Apr 14 13:31:18.439278 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 13:31:18.439288 systemd[1]: Detected virtualization kvm. Apr 14 13:31:18.439298 systemd[1]: Detected architecture x86-64. 
Apr 14 13:31:18.439307 systemd[1]: Detected first boot. Apr 14 13:31:18.439316 systemd[1]: Initializing machine ID from VM UUID. Apr 14 13:31:18.439325 zram_generator::config[1060]: No configuration found. Apr 14 13:31:18.439338 systemd[1]: Populated /etc with preset unit settings. Apr 14 13:31:18.439347 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 14 13:31:18.439355 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 14 13:31:18.439363 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 14 13:31:18.439372 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 14 13:31:18.439381 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 14 13:31:18.439388 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 14 13:31:18.439399 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 14 13:31:18.439409 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 14 13:31:18.439419 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 14 13:31:18.439426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 14 13:31:18.439434 systemd[1]: Created slice user.slice - User and Session Slice. Apr 14 13:31:18.439442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 13:31:18.439450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 13:31:18.439458 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 14 13:31:18.439467 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 14 13:31:18.439475 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 14 13:31:18.439485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 13:31:18.439493 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 14 13:31:18.439501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 13:31:18.439508 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 14 13:31:18.439516 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 14 13:31:18.439524 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 14 13:31:18.439531 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 14 13:31:18.439539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 13:31:18.439549 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 13:31:18.439556 systemd[1]: Reached target slices.target - Slice Units. Apr 14 13:31:18.439564 systemd[1]: Reached target swap.target - Swaps. Apr 14 13:31:18.439571 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 14 13:31:18.439580 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 14 13:31:18.439587 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 13:31:18.439595 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Apr 14 13:31:18.439603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 13:31:18.439611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 14 13:31:18.439620 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 14 13:31:18.439628 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 14 13:31:18.439637 systemd[1]: Mounting media.mount - External Media Directory... Apr 14 13:31:18.439644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:31:18.439652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 14 13:31:18.439660 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 14 13:31:18.439668 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 14 13:31:18.439676 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 14 13:31:18.439686 systemd[1]: Reached target machines.target - Containers. Apr 14 13:31:18.439694 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 14 13:31:18.439702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 13:31:18.439709 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 13:31:18.439717 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 14 13:31:18.439724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 13:31:18.439732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 13:31:18.439744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 13:31:18.439756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 14 13:31:18.439772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 13:31:18.439783 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 14 13:31:18.439790 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 14 13:31:18.439798 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 14 13:31:18.439806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 14 13:31:18.439813 kernel: fuse: init (API version 7.39) Apr 14 13:31:18.439822 systemd[1]: Stopped systemd-fsck-usr.service. Apr 14 13:31:18.439829 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 13:31:18.439836 kernel: loop: module loaded Apr 14 13:31:18.439846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 13:31:18.439854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 14 13:31:18.439887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 14 13:31:18.439919 systemd-journald[1144]: Collecting audit messages is disabled. 
Apr 14 13:31:18.439944 kernel: ACPI: bus type drm_connector registered Apr 14 13:31:18.439952 systemd-journald[1144]: Journal started Apr 14 13:31:18.439972 systemd-journald[1144]: Runtime Journal (/run/log/journal/f908f8157ae648bdb08e259a137e26bb) is 6.0M, max 48.4M, 42.3M free. Apr 14 13:31:18.174615 systemd[1]: Queued start job for default target multi-user.target. Apr 14 13:31:18.192046 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 14 13:31:18.192438 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 14 13:31:18.443507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 13:31:18.445803 systemd[1]: verity-setup.service: Deactivated successfully. Apr 14 13:31:18.445841 systemd[1]: Stopped verity-setup.service. Apr 14 13:31:18.447195 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:31:18.453660 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 13:31:18.454222 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 14 13:31:18.455746 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 14 13:31:18.457305 systemd[1]: Mounted media.mount - External Media Directory. Apr 14 13:31:18.458718 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 14 13:31:18.460266 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 14 13:31:18.461840 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 14 13:31:18.463381 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 14 13:31:18.465186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 13:31:18.467040 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 14 13:31:18.467185 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 14 13:31:18.468945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 13:31:18.469048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 13:31:18.470809 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 13:31:18.470947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 13:31:18.472523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 13:31:18.472632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 13:31:18.474505 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 14 13:31:18.474633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 14 13:31:18.476600 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 13:31:18.476715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 13:31:18.478432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 13:31:18.480203 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 14 13:31:18.482107 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 14 13:31:18.490801 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 13:31:18.494785 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 14 13:31:18.503456 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 14 13:31:18.506161 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 14 13:31:18.507749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 14 13:31:18.507781 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 13:31:18.510213 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 14 13:31:18.512951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 14 13:31:18.515516 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 14 13:31:18.517009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 13:31:18.518371 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 14 13:31:18.521938 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 14 13:31:18.523793 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 13:31:18.528005 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 14 13:31:18.529711 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 13:31:18.531901 systemd-journald[1144]: Time spent on flushing to /var/log/journal/f908f8157ae648bdb08e259a137e26bb is 22.080ms for 951 entries. Apr 14 13:31:18.531901 systemd-journald[1144]: System Journal (/var/log/journal/f908f8157ae648bdb08e259a137e26bb) is 8.0M, max 195.6M, 187.6M free. Apr 14 13:31:18.572571 systemd-journald[1144]: Received client request to flush runtime journal. Apr 14 13:31:18.572610 kernel: loop0: detected capacity change from 0 to 142488 Apr 14 13:31:18.531272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 13:31:18.536446 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 14 13:31:18.540265 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 13:31:18.545291 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 14 13:31:18.548665 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 14 13:31:18.550972 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 14 13:31:18.553499 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 14 13:31:18.555976 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 14 13:31:18.562097 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 14 13:31:18.572429 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 14 13:31:18.574818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 14 13:31:18.582637 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 13:31:18.588602 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Apr 14 13:31:18.594194 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 14 13:31:18.596541 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Apr 14 13:31:18.596562 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Apr 14 13:31:18.601306 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 14 13:31:18.601943 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 14 13:31:18.604085 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 13:31:18.614378 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 14 13:31:18.621188 kernel: loop1: detected capacity change from 0 to 140768 Apr 14 13:31:18.643271 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 14 13:31:18.654440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 13:31:18.661291 kernel: loop2: detected capacity change from 0 to 228704 Apr 14 13:31:18.676499 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Apr 14 13:31:18.676522 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Apr 14 13:31:18.680776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 13:31:18.696202 kernel: loop3: detected capacity change from 0 to 142488 Apr 14 13:31:18.708227 kernel: loop4: detected capacity change from 0 to 140768 Apr 14 13:31:18.720336 kernel: loop5: detected capacity change from 0 to 228704 Apr 14 13:31:18.727800 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 14 13:31:18.728886 (sd-merge)[1202]: Merged extensions into '/usr'. Apr 14 13:31:18.734686 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Apr 14 13:31:18.734786 systemd[1]: Reloading... Apr 14 13:31:18.778855 zram_generator::config[1225]: No configuration found. Apr 14 13:31:18.863251 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 14 13:31:18.881559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:31:18.914805 systemd[1]: Reloading finished in 179 ms. Apr 14 13:31:18.954330 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 14 13:31:18.956390 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 14 13:31:18.972510 systemd[1]: Starting ensure-sysext.service... Apr 14 13:31:18.975215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 13:31:18.984249 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Apr 14 13:31:18.984265 systemd[1]: Reloading... Apr 14 13:31:19.000599 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 13:31:19.001305 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 13:31:19.002091 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 13:31:19.002669 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. 
Apr 14 13:31:19.002794 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 14 13:31:19.006081 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 13:31:19.006277 systemd-tmpfiles[1266]: Skipping /boot Apr 14 13:31:19.018846 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 13:31:19.019021 systemd-tmpfiles[1266]: Skipping /boot Apr 14 13:31:19.032203 zram_generator::config[1298]: No configuration found. Apr 14 13:31:19.152936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:31:19.215804 systemd[1]: Reloading finished in 231 ms. Apr 14 13:31:19.235117 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 14 13:31:19.248471 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 13:31:19.256268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 13:31:19.259107 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 14 13:31:19.261767 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 14 13:31:19.266474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 13:31:19.270743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 13:31:19.283824 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 14 13:31:19.289341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:31:19.289554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 13:31:19.291015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 13:31:19.299725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 13:31:19.303500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 13:31:19.305592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 13:31:19.308521 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Apr 14 13:31:19.308540 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 14 13:31:19.310122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:31:19.311601 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 14 13:31:19.314308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 13:31:19.314474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 13:31:19.317453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 13:31:19.317601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 13:31:19.320695 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 13:31:19.320850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 14 13:31:19.323717 augenrules[1356]: No rules Apr 14 13:31:19.326967 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 13:31:19.332541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 13:31:19.336090 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 14 13:31:19.344971 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 14 13:31:19.349633 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 14 13:31:19.365093 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 14 13:31:19.365559 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:31:19.365765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 13:31:19.372606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 13:31:19.376374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 13:31:19.379824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 13:31:19.383400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 13:31:19.386330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 13:31:19.388553 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 13:31:19.394602 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 14 13:31:19.397330 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1367) Apr 14 13:31:19.401492 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 13:31:19.401550 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:31:19.402607 systemd[1]: Finished ensure-sysext.service. Apr 14 13:31:19.405123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 13:31:19.405383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 13:31:19.408451 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 13:31:19.408619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 13:31:19.411961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 13:31:19.412124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 13:31:19.415418 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 13:31:19.415574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 13:31:19.421753 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 14 13:31:19.422905 systemd-resolved[1335]: Positive Trust Anchors: Apr 14 13:31:19.423238 systemd-resolved[1335]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 13:31:19.423301 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 13:31:19.430353 systemd-resolved[1335]: Defaulting to hostname 'linux'. Apr 14 13:31:19.435576 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 13:31:19.443627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 13:31:19.446043 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 13:31:19.452219 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 14 13:31:19.453476 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 14 13:31:19.455295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 13:31:19.455355 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 13:31:19.460377 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 14 13:31:19.466159 kernel: ACPI: button: Power Button [PWRF] Apr 14 13:31:19.479166 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 14 13:31:19.484958 systemd-networkd[1400]: lo: Link UP Apr 14 13:31:19.484966 systemd-networkd[1400]: lo: Gained carrier Apr 14 13:31:19.486516 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 14 13:31:19.496936 systemd-networkd[1400]: Enumeration completed Apr 14 13:31:19.498935 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 14 13:31:19.507764 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 14 13:31:19.507955 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 14 13:31:19.497202 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 13:31:19.499161 systemd[1]: Reached target network.target - Network. Apr 14 13:31:19.508175 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:31:19.508179 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 13:31:19.508800 systemd-networkd[1400]: eth0: Link UP Apr 14 13:31:19.508803 systemd-networkd[1400]: eth0: Gained carrier Apr 14 13:31:19.508817 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:31:19.514358 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 14 13:31:19.516303 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Apr 14 13:31:19.523649 systemd[1]: Reached target time-set.target - System Time Set. Apr 14 13:31:19.531265 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 13:31:19.533430 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Apr 14 13:31:20.057442 systemd-resolved[1335]: Clock change detected. Flushing caches. Apr 14 13:31:20.057840 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 14 13:31:20.057881 systemd-timesyncd[1414]: Initial clock synchronization to Tue 2026-04-14 13:31:20.057402 UTC. Apr 14 13:31:20.058244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:31:20.074823 kernel: mousedev: PS/2 mouse device common for all mice Apr 14 13:31:20.209373 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 14 13:31:20.230761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:31:20.247039 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 14 13:31:20.256775 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 13:31:20.284981 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 14 13:31:20.287586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 13:31:20.289155 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 13:31:20.290729 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 14 13:31:20.293107 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 14 13:31:20.295441 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 14 13:31:20.297332 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 14 13:31:20.299064 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 14 13:31:20.301340 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 14 13:31:20.301389 systemd[1]: Reached target paths.target - Path Units. Apr 14 13:31:20.302665 systemd[1]: Reached target timers.target - Timer Units. Apr 14 13:31:20.304665 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 14 13:31:20.307486 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 14 13:31:20.317960 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 14 13:31:20.321791 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 14 13:31:20.323932 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 14 13:31:20.325461 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 13:31:20.326797 systemd[1]: Reached target basic.target - Basic System. Apr 14 13:31:20.326926 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 14 13:31:20.326941 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 14 13:31:20.327921 systemd[1]: Starting containerd.service - containerd container runtime... Apr 14 13:31:20.331013 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Apr 14 13:31:20.333989 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 13:31:20.335927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 14 13:31:20.338494 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 14 13:31:20.340767 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 14 13:31:20.343061 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 14 13:31:20.346994 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 14 13:31:20.347522 jq[1438]: false Apr 14 13:31:20.352953 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 14 13:31:20.355996 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 14 13:31:20.360655 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 14 13:31:20.362436 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 14 13:31:20.362878 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 13:31:20.363915 systemd[1]: Starting update-engine.service - Update Engine... Apr 14 13:31:20.366122 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 14 13:31:20.368532 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 14 13:31:20.369699 extend-filesystems[1439]: Found loop3 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found loop4 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found loop5 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found sr0 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda1 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda2 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda3 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found usr Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda4 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda6 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda7 Apr 14 13:31:20.371425 extend-filesystems[1439]: Found vda9 Apr 14 13:31:20.371425 extend-filesystems[1439]: Checking size of /dev/vda9 Apr 14 13:31:20.382059 dbus-daemon[1437]: [system] SELinux support is enabled Apr 14 13:31:20.380233 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 14 13:31:20.380441 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 14 13:31:20.388217 jq[1452]: true Apr 14 13:31:20.383629 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 14 13:31:20.390105 update_engine[1451]: I20260414 13:31:20.389923 1451 main.cc:92] Flatcar Update Engine starting Apr 14 13:31:20.388072 systemd[1]: motdgen.service: Deactivated successfully. Apr 14 13:31:20.388274 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 14 13:31:20.391632 update_engine[1451]: I20260414 13:31:20.391571 1451 update_check_scheduler.cc:74] Next update check in 7m21s Apr 14 13:31:20.398630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 14 13:31:20.398902 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 14 13:31:20.403365 extend-filesystems[1439]: Resized partition /dev/vda9 Apr 14 13:31:20.407176 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Apr 14 13:31:20.460038 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 14 13:31:20.460094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1366) Apr 14 13:31:20.455960 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 14 13:31:20.466228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 14 13:31:20.466276 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 14 13:31:20.468927 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 13:31:20.468968 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 13:31:20.469775 jq[1462]: true Apr 14 13:31:20.473723 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Apr 14 13:31:20.473742 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 14 13:31:20.476133 systemd-logind[1450]: New seat seat0. Apr 14 13:31:20.483053 systemd[1]: Started systemd-logind.service - User Login Management. Apr 14 13:31:20.490369 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 14 13:31:20.492582 tar[1458]: linux-amd64/LICENSE Apr 14 13:31:20.493125 systemd[1]: Started update-engine.service - Update Engine. Apr 14 13:31:20.506381 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 13:31:20.510916 tar[1458]: linux-amd64/helm Apr 14 13:31:20.513389 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 14 13:31:20.513389 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 14 13:31:20.513389 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 14 13:31:20.520488 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Apr 14 13:31:20.514522 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 14 13:31:20.514694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 14 13:31:20.534833 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Apr 14 13:31:20.538450 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 13:31:20.540981 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 13:31:20.544130 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 13:31:20.644689 containerd[1463]: time="2026-04-14T13:31:20.644455564Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 13:31:20.671498 containerd[1463]: time="2026-04-14T13:31:20.671414822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673187 containerd[1463]: time="2026-04-14T13:31:20.673127223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673187 containerd[1463]: time="2026-04-14T13:31:20.673171882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 13:31:20.673187 containerd[1463]: time="2026-04-14T13:31:20.673189463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 13:31:20.673352 containerd[1463]: time="2026-04-14T13:31:20.673320982Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 13:31:20.673352 containerd[1463]: time="2026-04-14T13:31:20.673345982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673401 containerd[1463]: time="2026-04-14T13:31:20.673384415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673423 containerd[1463]: time="2026-04-14T13:31:20.673403017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673561 containerd[1463]: time="2026-04-14T13:31:20.673523714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673592 containerd[1463]: time="2026-04-14T13:31:20.673563396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673592 containerd[1463]: time="2026-04-14T13:31:20.673573703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673592 containerd[1463]: time="2026-04-14T13:31:20.673580270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673649 containerd[1463]: time="2026-04-14T13:31:20.673633796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673837 containerd[1463]: time="2026-04-14T13:31:20.673792769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673953 containerd[1463]: time="2026-04-14T13:31:20.673935830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:31:20.673976 containerd[1463]: time="2026-04-14T13:31:20.673953945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 14 13:31:20.674034 containerd[1463]: time="2026-04-14T13:31:20.674018487Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 13:31:20.674074 containerd[1463]: time="2026-04-14T13:31:20.674059953Z" level=info msg="metadata content store policy set" policy=shared Apr 14 13:31:20.680821 containerd[1463]: time="2026-04-14T13:31:20.680677930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 13:31:20.680821 containerd[1463]: time="2026-04-14T13:31:20.680743348Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 13:31:20.680821 containerd[1463]: time="2026-04-14T13:31:20.680759133Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 13:31:20.680821 containerd[1463]: time="2026-04-14T13:31:20.680783762Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 13:31:20.680821 containerd[1463]: time="2026-04-14T13:31:20.680796926Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 13:31:20.680962 containerd[1463]: time="2026-04-14T13:31:20.680937179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 13:31:20.681209 containerd[1463]: time="2026-04-14T13:31:20.681174657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 14 13:31:20.681290 containerd[1463]: time="2026-04-14T13:31:20.681262744Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 13:31:20.681290 containerd[1463]: time="2026-04-14T13:31:20.681286343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 13:31:20.681317 containerd[1463]: time="2026-04-14T13:31:20.681295851Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 13:31:20.681317 containerd[1463]: time="2026-04-14T13:31:20.681307142Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681346 containerd[1463]: time="2026-04-14T13:31:20.681316495Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681346 containerd[1463]: time="2026-04-14T13:31:20.681326000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681346 containerd[1463]: time="2026-04-14T13:31:20.681337437Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681381 containerd[1463]: time="2026-04-14T13:31:20.681348430Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681381 containerd[1463]: time="2026-04-14T13:31:20.681359143Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681381 containerd[1463]: time="2026-04-14T13:31:20.681367881Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 14 13:31:20.681422 containerd[1463]: time="2026-04-14T13:31:20.681402481Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 14 13:31:20.681436 containerd[1463]: time="2026-04-14T13:31:20.681420529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681436 containerd[1463]: time="2026-04-14T13:31:20.681431532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681464 containerd[1463]: time="2026-04-14T13:31:20.681440640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681464 containerd[1463]: time="2026-04-14T13:31:20.681451756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681464 containerd[1463]: time="2026-04-14T13:31:20.681460299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681502 containerd[1463]: time="2026-04-14T13:31:20.681469674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681502 containerd[1463]: time="2026-04-14T13:31:20.681478676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681502 containerd[1463]: time="2026-04-14T13:31:20.681487563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681502 containerd[1463]: time="2026-04-14T13:31:20.681497169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681596 containerd[1463]: time="2026-04-14T13:31:20.681513987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681596 containerd[1463]: time="2026-04-14T13:31:20.681524811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681596 containerd[1463]: time="2026-04-14T13:31:20.681533406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681596 containerd[1463]: time="2026-04-14T13:31:20.681567653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681596 containerd[1463]: time="2026-04-14T13:31:20.681580102Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 13:31:20.681658 containerd[1463]: time="2026-04-14T13:31:20.681600585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681658 containerd[1463]: time="2026-04-14T13:31:20.681610801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681658 containerd[1463]: time="2026-04-14T13:31:20.681618345Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 13:31:20.681658 containerd[1463]: time="2026-04-14T13:31:20.681652750Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 14 13:31:20.681709 containerd[1463]: time="2026-04-14T13:31:20.681665842Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 13:31:20.681709 containerd[1463]: time="2026-04-14T13:31:20.681674208Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 14 13:31:20.681709 containerd[1463]: time="2026-04-14T13:31:20.681682804Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 13:31:20.681709 containerd[1463]: time="2026-04-14T13:31:20.681689428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681709 containerd[1463]: time="2026-04-14T13:31:20.681698644Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 13:31:20.681709 containerd[1463]: time="2026-04-14T13:31:20.681709029Z" level=info msg="NRI interface is disabled by configuration." Apr 14 13:31:20.681784 containerd[1463]: time="2026-04-14T13:31:20.681716636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 14 13:31:20.681984 containerd[1463]: time="2026-04-14T13:31:20.681941266Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 13:31:20.682122 containerd[1463]: time="2026-04-14T13:31:20.681990735Z" level=info msg="Connect containerd service" Apr 14 13:31:20.682122 containerd[1463]: time="2026-04-14T13:31:20.682022182Z" level=info msg="using legacy CRI server" Apr 14 13:31:20.682122 containerd[1463]: time="2026-04-14T13:31:20.682027182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 13:31:20.683679 containerd[1463]: time="2026-04-14T13:31:20.683538138Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 13:31:20.684245 containerd[1463]: time="2026-04-14T13:31:20.684136821Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 13:31:20.684407 containerd[1463]: time="2026-04-14T13:31:20.684289943Z" level=info msg="Start subscribing containerd event" Apr 14 13:31:20.684498 containerd[1463]: time="2026-04-14T13:31:20.684480722Z" level=info msg="Start recovering state" Apr 14 13:31:20.684575 containerd[1463]: time="2026-04-14T13:31:20.684558686Z" level=info msg="Start event monitor" Apr 14 13:31:20.684575 containerd[1463]: time="2026-04-14T13:31:20.684573520Z" level=info msg="Start snapshots syncer" Apr 14 13:31:20.684612 containerd[1463]: time="2026-04-14T13:31:20.684580499Z" level=info msg="Start cni network conf syncer for default" Apr 14 13:31:20.684612 containerd[1463]: time="2026-04-14T13:31:20.684603353Z" level=info msg="Start streaming server" Apr 14 13:31:20.689537 containerd[1463]: time="2026-04-14T13:31:20.687004873Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 13:31:20.689537 containerd[1463]: time="2026-04-14T13:31:20.687047914Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 13:31:20.689537 containerd[1463]: time="2026-04-14T13:31:20.687105527Z" level=info msg="containerd successfully booted in 0.043831s" Apr 14 13:31:20.687201 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 13:31:20.724231 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 13:31:20.745237 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 13:31:20.758267 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 13:31:20.763646 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 13:31:20.763830 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 13:31:20.768274 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 13:31:20.781138 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 13:31:20.789616 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 13:31:20.792656 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 13:31:20.794501 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 14 13:31:20.934030 tar[1458]: linux-amd64/README.md Apr 14 13:31:20.949022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 13:31:21.153721 systemd-networkd[1400]: eth0: Gained IPv6LL Apr 14 13:31:21.156607 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 13:31:21.158966 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 13:31:21.175769 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 13:31:21.179135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:21.181569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 13:31:21.195750 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 13:31:21.195930 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 13:31:21.197937 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 13:31:21.201382 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 13:31:21.890156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:21.892382 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 13:31:21.896910 systemd[1]: Startup finished in 922ms (kernel) + 5.103s (initrd) + 3.609s (userspace) = 9.636s. Apr 14 13:31:21.947250 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:31:22.394200 kubelet[1550]: E0414 13:31:22.394070 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:31:22.396458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:31:22.396626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:31:26.620727 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 13:31:26.629340 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:38000.service - OpenSSH per-connection server daemon (10.0.0.1:38000). Apr 14 13:31:26.673398 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 38000 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:26.674968 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:26.681733 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 13:31:26.697451 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 13:31:26.699007 systemd-logind[1450]: New session 1 of user core. Apr 14 13:31:26.707219 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 13:31:26.709054 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 13:31:26.717066 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 13:31:26.824232 systemd[1568]: Queued start job for default target default.target. Apr 14 13:31:26.833015 systemd[1568]: Created slice app.slice - User Application Slice. Apr 14 13:31:26.833053 systemd[1568]: Reached target paths.target - Paths. 
Apr 14 13:31:26.833062 systemd[1568]: Reached target timers.target - Timers. Apr 14 13:31:26.836247 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 13:31:26.847911 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 13:31:26.848044 systemd[1568]: Reached target sockets.target - Sockets. Apr 14 13:31:26.848074 systemd[1568]: Reached target basic.target - Basic System. Apr 14 13:31:26.848124 systemd[1568]: Reached target default.target - Main User Target. Apr 14 13:31:26.848154 systemd[1568]: Startup finished in 125ms. Apr 14 13:31:26.848218 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 13:31:26.849261 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 13:31:26.920099 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:38004.service - OpenSSH per-connection server daemon (10.0.0.1:38004). Apr 14 13:31:26.958474 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 38004 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:26.960000 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:26.964661 systemd-logind[1450]: New session 2 of user core. Apr 14 13:31:26.973292 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 13:31:27.025800 sshd[1579]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.043451 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:38004.service: Deactivated successfully. Apr 14 13:31:27.044569 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 13:31:27.046271 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Apr 14 13:31:27.047211 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:38014.service - OpenSSH per-connection server daemon (10.0.0.1:38014). Apr 14 13:31:27.047699 systemd-logind[1450]: Removed session 2. Apr 14 13:31:27.081179 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 38014 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.082417 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.086784 systemd-logind[1450]: New session 3 of user core. Apr 14 13:31:27.095987 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 13:31:27.146068 sshd[1586]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.156934 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:38014.service: Deactivated successfully. Apr 14 13:31:27.158236 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 13:31:27.159794 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Apr 14 13:31:27.161877 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:38016.service - OpenSSH per-connection server daemon (10.0.0.1:38016). Apr 14 13:31:27.162536 systemd-logind[1450]: Removed session 3. Apr 14 13:31:27.196785 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 38016 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.198012 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.202344 systemd-logind[1450]: New session 4 of user core. Apr 14 13:31:27.213443 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 13:31:27.272111 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.290082 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:38016.service: Deactivated successfully. 
Apr 14 13:31:27.292009 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 13:31:27.293266 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Apr 14 13:31:27.312555 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:38020.service - OpenSSH per-connection server daemon (10.0.0.1:38020). Apr 14 13:31:27.313605 systemd-logind[1450]: Removed session 4. Apr 14 13:31:27.344633 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 38020 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.345886 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.352281 systemd-logind[1450]: New session 5 of user core. Apr 14 13:31:27.366097 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 13:31:27.427452 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 13:31:27.427697 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:27.448515 sudo[1603]: pam_unix(sudo:session): session closed for user root Apr 14 13:31:27.450619 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.466119 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:38020.service: Deactivated successfully. Apr 14 13:31:27.467966 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 13:31:27.469009 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Apr 14 13:31:27.475367 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:38024.service - OpenSSH per-connection server daemon (10.0.0.1:38024). Apr 14 13:31:27.476215 systemd-logind[1450]: Removed session 5. Apr 14 13:31:27.507606 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 38024 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.509712 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.514602 systemd-logind[1450]: New session 6 of user core. Apr 14 13:31:27.524291 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 13:31:27.579135 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 13:31:27.579351 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:27.582494 sudo[1612]: pam_unix(sudo:session): session closed for user root Apr 14 13:31:27.590037 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 13:31:27.590321 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:27.607385 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 13:31:27.610551 auditctl[1615]: No rules Apr 14 13:31:27.612678 systemd[1]: audit-rules.service: Deactivated successfully. Apr 14 13:31:27.613290 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 13:31:27.626180 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 13:31:27.663939 augenrules[1634]: No rules Apr 14 13:31:27.665404 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 13:31:27.666383 sudo[1611]: pam_unix(sudo:session): session closed for user root Apr 14 13:31:27.667865 sshd[1608]: pam_unix(sshd:session): session closed for user core Apr 14 13:31:27.678759 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:38024.service: Deactivated successfully. 
Apr 14 13:31:27.679946 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 13:31:27.680890 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Apr 14 13:31:27.687209 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:38040.service - OpenSSH per-connection server daemon (10.0.0.1:38040). Apr 14 13:31:27.688570 systemd-logind[1450]: Removed session 6. Apr 14 13:31:27.716797 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 38040 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:31:27.717883 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:31:27.721427 systemd-logind[1450]: New session 7 of user core. Apr 14 13:31:27.735016 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 13:31:27.787793 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 13:31:27.788058 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:31:28.044247 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 13:31:28.044571 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 13:31:28.286436 dockerd[1663]: time="2026-04-14T13:31:28.286348399Z" level=info msg="Starting up" Apr 14 13:31:28.496173 dockerd[1663]: time="2026-04-14T13:31:28.495998202Z" level=info msg="Loading containers: start." Apr 14 13:31:28.611875 kernel: Initializing XFRM netlink socket Apr 14 13:31:28.692098 systemd-networkd[1400]: docker0: Link UP Apr 14 13:31:28.715910 dockerd[1663]: time="2026-04-14T13:31:28.715770264Z" level=info msg="Loading containers: done." Apr 14 13:31:28.733917 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1517104030-merged.mount: Deactivated successfully. Apr 14 13:31:28.734428 dockerd[1663]: time="2026-04-14T13:31:28.734369751Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 13:31:28.734782 dockerd[1663]: time="2026-04-14T13:31:28.734615049Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 13:31:28.735064 dockerd[1663]: time="2026-04-14T13:31:28.735020012Z" level=info msg="Daemon has completed initialization" Apr 14 13:31:28.781472 dockerd[1663]: time="2026-04-14T13:31:28.781321386Z" level=info msg="API listen on /run/docker.sock" Apr 14 13:31:28.781731 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 13:31:29.223423 containerd[1463]: time="2026-04-14T13:31:29.223294895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 13:31:29.707142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934945053.mount: Deactivated successfully. 
Apr 14 13:31:30.442785 containerd[1463]: time="2026-04-14T13:31:30.442703096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:30.443571 containerd[1463]: time="2026-04-14T13:31:30.443533774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 13:31:30.444638 containerd[1463]: time="2026-04-14T13:31:30.444582622Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:30.446849 containerd[1463]: time="2026-04-14T13:31:30.446787591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:30.447863 containerd[1463]: time="2026-04-14T13:31:30.447827698Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 1.224456496s" Apr 14 13:31:30.447922 containerd[1463]: time="2026-04-14T13:31:30.447869947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 13:31:30.448501 containerd[1463]: time="2026-04-14T13:31:30.448443025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 13:31:31.378240 containerd[1463]: time="2026-04-14T13:31:31.378122270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:31.378798 containerd[1463]: time="2026-04-14T13:31:31.378766751Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 13:31:31.380246 containerd[1463]: time="2026-04-14T13:31:31.380096575Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:31.382554 containerd[1463]: time="2026-04-14T13:31:31.382505010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:31.383481 containerd[1463]: time="2026-04-14T13:31:31.383449047Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 934.986067ms" Apr 14 13:31:31.383539 containerd[1463]: time="2026-04-14T13:31:31.383482128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 
13:31:31.384016 containerd[1463]: time="2026-04-14T13:31:31.383989136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 13:31:32.156588 containerd[1463]: time="2026-04-14T13:31:32.156407091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:32.157267 containerd[1463]: time="2026-04-14T13:31:32.157230167Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 13:31:32.158401 containerd[1463]: time="2026-04-14T13:31:32.158365041Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:32.161693 containerd[1463]: time="2026-04-14T13:31:32.161640424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:32.162978 containerd[1463]: time="2026-04-14T13:31:32.162909245Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 778.878476ms" Apr 14 13:31:32.162978 containerd[1463]: time="2026-04-14T13:31:32.162955882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 13:31:32.165253 containerd[1463]: time="2026-04-14T13:31:32.165227293Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 13:31:32.516758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 13:31:32.527068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:32.679152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:32.693145 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:31:32.737568 kubelet[1889]: E0414 13:31:32.737521 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:31:32.741044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:31:32.741202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:31:33.013255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485962492.mount: Deactivated successfully. 
Apr 14 13:31:33.426241 containerd[1463]: time="2026-04-14T13:31:33.425982790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:33.427289 containerd[1463]: time="2026-04-14T13:31:33.427240402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 13:31:33.428261 containerd[1463]: time="2026-04-14T13:31:33.428223961Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:33.430647 containerd[1463]: time="2026-04-14T13:31:33.430586240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:33.431261 containerd[1463]: time="2026-04-14T13:31:33.431211277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.265951031s" Apr 14 13:31:33.431261 containerd[1463]: time="2026-04-14T13:31:33.431245396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 13:31:33.432044 containerd[1463]: time="2026-04-14T13:31:33.431962413Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 13:31:33.811546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896657565.mount: Deactivated successfully. 
Apr 14 13:31:34.506443 containerd[1463]: time="2026-04-14T13:31:34.506347742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.507306 containerd[1463]: time="2026-04-14T13:31:34.507251349Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 13:31:34.508234 containerd[1463]: time="2026-04-14T13:31:34.508180887Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.512368 containerd[1463]: time="2026-04-14T13:31:34.512239446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.513211 containerd[1463]: time="2026-04-14T13:31:34.513052087Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.081053265s" Apr 14 13:31:34.513211 containerd[1463]: time="2026-04-14T13:31:34.513100169Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 13:31:34.514403 containerd[1463]: time="2026-04-14T13:31:34.514327877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 13:31:34.910209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700607967.mount: Deactivated successfully. 
Apr 14 13:31:34.916793 containerd[1463]: time="2026-04-14T13:31:34.916527678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.917608 containerd[1463]: time="2026-04-14T13:31:34.917547759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 13:31:34.918889 containerd[1463]: time="2026-04-14T13:31:34.918868597Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.924670 containerd[1463]: time="2026-04-14T13:31:34.924432152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:34.925346 containerd[1463]: time="2026-04-14T13:31:34.925309097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 410.955302ms" Apr 14 13:31:34.925394 containerd[1463]: time="2026-04-14T13:31:34.925351663Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 13:31:34.927886 containerd[1463]: time="2026-04-14T13:31:34.927690477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 13:31:35.357401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645205098.mount: Deactivated successfully. Apr 14 13:31:36.013656 containerd[1463]: time="2026-04-14T13:31:36.013585902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:36.014344 containerd[1463]: time="2026-04-14T13:31:36.014282190Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 14 13:31:36.015425 containerd[1463]: time="2026-04-14T13:31:36.015381552Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:36.018077 containerd[1463]: time="2026-04-14T13:31:36.018046857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:31:36.018875 containerd[1463]: time="2026-04-14T13:31:36.018842948Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.091088484s" Apr 14 13:31:36.018875 containerd[1463]: time="2026-04-14T13:31:36.018876516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 14 13:31:38.223845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 13:31:38.238072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:38.257064 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)... Apr 14 13:31:38.257085 systemd[1]: Reloading... Apr 14 13:31:38.310874 zram_generator::config[2088]: No configuration found. Apr 14 13:31:38.402309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:31:38.456105 systemd[1]: Reloading finished in 198 ms. Apr 14 13:31:38.507191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:38.510967 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 13:31:38.511187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:38.525146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:38.715446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:38.720339 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 13:31:38.910168 kubelet[2141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:31:38.910168 kubelet[2141]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 13:31:38.910168 kubelet[2141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 13:31:38.910168 kubelet[2141]: I0414 13:31:38.909831 2141 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 13:31:39.673951 kubelet[2141]: I0414 13:31:39.673889 2141 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 13:31:39.673951 kubelet[2141]: I0414 13:31:39.673928 2141 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 13:31:39.674188 kubelet[2141]: I0414 13:31:39.674157 2141 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 13:31:39.702398 kubelet[2141]: I0414 13:31:39.702328 2141 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:31:39.705485 kubelet[2141]: E0414 13:31:39.705289 2141 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:31:39.708849 kubelet[2141]: E0414 13:31:39.708688 2141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 13:31:39.709043 kubelet[2141]: I0414 13:31:39.708876 2141 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 13:31:39.720702 kubelet[2141]: I0414 13:31:39.720611 2141 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 14 13:31:39.721017 kubelet[2141]: I0414 13:31:39.720936 2141 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 13:31:39.721280 kubelet[2141]: I0414 13:31:39.720980 2141 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 13:31:39.721280 kubelet[2141]: I0414 13:31:39.721248 2141 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 13:31:39.721280 kubelet[2141]: I0414 13:31:39.721260 2141 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 13:31:39.721472 kubelet[2141]: I0414 13:31:39.721405 2141 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:31:39.729135 kubelet[2141]: I0414 13:31:39.729035 2141 kubelet.go:480] "Attempting to sync node with API server" Apr 14 13:31:39.729135 kubelet[2141]: I0414 13:31:39.729091 2141 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 13:31:39.729135 kubelet[2141]: I0414 13:31:39.729135 2141 kubelet.go:386] "Adding apiserver pod source" Apr 14 13:31:39.729412 kubelet[2141]: I0414 13:31:39.729159 2141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 13:31:39.734429 kubelet[2141]: I0414 13:31:39.734360 2141 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 13:31:39.735113 kubelet[2141]: I0414 13:31:39.735072 2141 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 13:31:39.737861 kubelet[2141]: E0414 13:31:39.737576 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 
13:31:39.737861 kubelet[2141]: E0414 13:31:39.737571 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:31:39.737861 kubelet[2141]: W0414 13:31:39.737685 2141 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 13:31:39.745779 kubelet[2141]: I0414 13:31:39.745725 2141 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 13:31:39.745923 kubelet[2141]: I0414 13:31:39.745833 2141 server.go:1289] "Started kubelet" Apr 14 13:31:39.747834 kubelet[2141]: I0414 13:31:39.746007 2141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 13:31:39.747834 kubelet[2141]: I0414 13:31:39.746365 2141 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 13:31:39.747834 kubelet[2141]: I0414 13:31:39.746406 2141 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 13:31:39.747834 kubelet[2141]: I0414 13:31:39.747226 2141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 13:31:39.747834 kubelet[2141]: I0414 13:31:39.747614 2141 server.go:317] "Adding debug handlers to kubelet server" Apr 14 13:31:39.749693 kubelet[2141]: E0414 13:31:39.749562 2141 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:31:39.749881 kubelet[2141]: I0414 13:31:39.749862 2141 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 13:31:39.750253 kubelet[2141]: I0414 13:31:39.750207 2141 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 13:31:39.750286 kubelet[2141]: I0414 13:31:39.750279 2141 reconciler.go:26] "Reconciler: start to sync state" Apr 14 13:31:39.750788 kubelet[2141]: E0414 13:31:39.750659 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:31:39.752964 kubelet[2141]: E0414 13:31:39.748975 2141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63c632fcee1b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:31:39.745755577 +0000 UTC m=+1.021256121,LastTimestamp:2026-04-14 13:31:39.745755577 +0000 UTC m=+1.021256121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:31:39.753193 kubelet[2141]: E0414 13:31:39.753148 2141 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 13:31:39.754772 kubelet[2141]: I0414 13:31:39.753913 2141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 13:31:39.754772 kubelet[2141]: E0414 13:31:39.754406 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Apr 14 13:31:39.754772 kubelet[2141]: I0414 13:31:39.754462 2141 factory.go:223] Registration of the systemd container factory successfully Apr 14 13:31:39.754772 kubelet[2141]: I0414 13:31:39.754558 2141 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 13:31:39.755901 kubelet[2141]: I0414 13:31:39.755852 2141 factory.go:223] Registration of the containerd container factory successfully Apr 14 13:31:39.778025 kubelet[2141]: I0414 13:31:39.777933 2141 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 13:31:39.778595 kubelet[2141]: I0414 13:31:39.778569 2141 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 13:31:39.778595 kubelet[2141]: I0414 13:31:39.778590 2141 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 13:31:39.778683 kubelet[2141]: I0414 13:31:39.778606 2141 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:31:39.780110 kubelet[2141]: I0414 13:31:39.779862 2141 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 13:31:39.780110 kubelet[2141]: I0414 13:31:39.779883 2141 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 13:31:39.780110 kubelet[2141]: I0414 13:31:39.779903 2141 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 13:31:39.780110 kubelet[2141]: I0414 13:31:39.779909 2141 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 13:31:39.780110 kubelet[2141]: E0414 13:31:39.779940 2141 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:31:39.782895 kubelet[2141]: E0414 13:31:39.782870 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:31:39.786464 kubelet[2141]: I0414 13:31:39.786412 2141 policy_none.go:49] "None policy: Start" Apr 14 13:31:39.786464 kubelet[2141]: I0414 13:31:39.786451 2141 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 13:31:39.786464 kubelet[2141]: I0414 13:31:39.786462 2141 state_mem.go:35] "Initializing new in-memory state store" Apr 14 13:31:39.793781 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 14 13:31:39.819575 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 14 13:31:39.824459 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 14 13:31:39.835899 kubelet[2141]: E0414 13:31:39.835800 2141 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 13:31:39.836104 kubelet[2141]: I0414 13:31:39.836023 2141 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:31:39.836104 kubelet[2141]: I0414 13:31:39.836035 2141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:31:39.836333 kubelet[2141]: I0414 13:31:39.836302 2141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:31:39.837214 kubelet[2141]: E0414 13:31:39.837172 2141 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 13:31:39.837292 kubelet[2141]: E0414 13:31:39.837229 2141 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:31:39.899285 systemd[1]: Created slice kubepods-burstable-pode6a01249a97f49501a987efc322eaa4c.slice - libcontainer container kubepods-burstable-pode6a01249a97f49501a987efc322eaa4c.slice. Apr 14 13:31:39.961970 kernel: hrtimer: interrupt took 16216413 ns Apr 14 13:31:39.962695 kubelet[2141]: E0414 13:31:39.959836 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Apr 14 13:31:39.967743 kubelet[2141]: I0414 13:31:39.967702 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:39.970195 kubelet[2141]: E0414 13:31:39.968867 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 14 13:31:39.971991 kubelet[2141]: E0414 13:31:39.971968 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:39.976303 systemd[1]: Created slice kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice - libcontainer container kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice. Apr 14 13:31:39.979407 kubelet[2141]: E0414 13:31:39.979332 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:39.981995 systemd[1]: Created slice kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice - libcontainer container kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice. 
Apr 14 13:31:39.983214 kubelet[2141]: E0414 13:31:39.983178 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:40.061686 kubelet[2141]: I0414 13:31:40.061494 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:40.061686 kubelet[2141]: I0414 13:31:40.061664 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:40.063157 kubelet[2141]: I0414 13:31:40.061742 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:40.063157 kubelet[2141]: I0414 13:31:40.061793 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:40.063157 kubelet[2141]: I0414 13:31:40.061828 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6a01249a97f49501a987efc322eaa4c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a01249a97f49501a987efc322eaa4c\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:40.063989 kubelet[2141]: I0414 13:31:40.063280 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6a01249a97f49501a987efc322eaa4c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6a01249a97f49501a987efc322eaa4c\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:40.064219 kubelet[2141]: I0414 13:31:40.064120 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:40.064300 kubelet[2141]: I0414 13:31:40.064242 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:40.064327 kubelet[2141]: I0414 13:31:40.064306 2141 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6a01249a97f49501a987efc322eaa4c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a01249a97f49501a987efc322eaa4c\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:40.172412 kubelet[2141]: I0414 13:31:40.172337 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:40.172947 kubelet[2141]: E0414 13:31:40.172887 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 14 13:31:40.274958 kubelet[2141]: E0414 13:31:40.274888 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:40.276304 containerd[1463]: time="2026-04-14T13:31:40.276074543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6a01249a97f49501a987efc322eaa4c,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:40.280830 kubelet[2141]: E0414 13:31:40.280765 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:40.281387 containerd[1463]: time="2026-04-14T13:31:40.281351517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:40.284769 kubelet[2141]: E0414 13:31:40.284728 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:40.285883 containerd[1463]: time="2026-04-14T13:31:40.285844811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:40.361577 kubelet[2141]: E0414 13:31:40.361287 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Apr 14 13:31:40.575248 kubelet[2141]: I0414 13:31:40.575141 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:40.575884 kubelet[2141]: E0414 13:31:40.575793 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 14 13:31:40.737385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365997807.mount: Deactivated successfully. 
Apr 14 13:31:40.748077 containerd[1463]: time="2026-04-14T13:31:40.747919435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:40.748572 containerd[1463]: time="2026-04-14T13:31:40.748527278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 13:31:40.749867 containerd[1463]: time="2026-04-14T13:31:40.749773477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:40.750405 containerd[1463]: time="2026-04-14T13:31:40.750380124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:40.751249 containerd[1463]: time="2026-04-14T13:31:40.751218693Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:40.752120 containerd[1463]: time="2026-04-14T13:31:40.751877218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:31:40.752786 containerd[1463]: time="2026-04-14T13:31:40.752753263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:31:40.754377 containerd[1463]: time="2026-04-14T13:31:40.754324657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:31:40.755742 containerd[1463]: time="2026-04-14T13:31:40.755703687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.283344ms" Apr 14 13:31:40.756754 containerd[1463]: time="2026-04-14T13:31:40.756706426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.343194ms" Apr 14 13:31:40.757581 containerd[1463]: time="2026-04-14T13:31:40.757543509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.606815ms" Apr 14 13:31:40.847183 kubelet[2141]: E0414 13:31:40.846893 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:31:40.943284 kubelet[2141]: E0414 13:31:40.943205 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:31:41.051824 kubelet[2141]: E0414 13:31:41.051742 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:31:41.165488 kubelet[2141]: E0414 13:31:41.165136 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Apr 14 13:31:41.170864 kubelet[2141]: E0414 13:31:41.170749 2141 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:31:41.378540 kubelet[2141]: I0414 13:31:41.378460 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:41.379042 kubelet[2141]: E0414 13:31:41.378987 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Apr 14 13:31:41.454106 containerd[1463]: time="2026-04-14T13:31:41.453568599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:41.454106 containerd[1463]: time="2026-04-14T13:31:41.453758428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:41.454106 containerd[1463]: time="2026-04-14T13:31:41.453777782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:41.454106 containerd[1463]: time="2026-04-14T13:31:41.453919446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:41.462499 containerd[1463]: time="2026-04-14T13:31:41.462167927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:41.462499 containerd[1463]: time="2026-04-14T13:31:41.462368396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:41.462499 containerd[1463]: time="2026-04-14T13:31:41.462390952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:41.462756 containerd[1463]: time="2026-04-14T13:31:41.462471247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:41.563464 containerd[1463]: time="2026-04-14T13:31:41.561752253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:41.563464 containerd[1463]: time="2026-04-14T13:31:41.561907487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:41.563464 containerd[1463]: time="2026-04-14T13:31:41.561917329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:41.563464 containerd[1463]: time="2026-04-14T13:31:41.562036481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:41.571325 systemd[1]: Started cri-containerd-5402ebbb74d66a3d85c14614b6b7053d9c29f206afeff88e153f668b73dd201f.scope - libcontainer container 5402ebbb74d66a3d85c14614b6b7053d9c29f206afeff88e153f668b73dd201f. Apr 14 13:31:41.573862 systemd[1]: Started cri-containerd-9c5d276824e4e0814cd3fd7c0c77569297100c6b19e8ce785046268a43cd0306.scope - libcontainer container 9c5d276824e4e0814cd3fd7c0c77569297100c6b19e8ce785046268a43cd0306. Apr 14 13:31:41.597968 systemd[1]: Started cri-containerd-29321245cf50f16c50358f04a99bbf0cb9f5ba031072719c2c77fc19cc9e12ba.scope - libcontainer container 29321245cf50f16c50358f04a99bbf0cb9f5ba031072719c2c77fc19cc9e12ba. Apr 14 13:31:41.764132 containerd[1463]: time="2026-04-14T13:31:41.763999720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c5d276824e4e0814cd3fd7c0c77569297100c6b19e8ce785046268a43cd0306\"" Apr 14 13:31:41.770564 kubelet[2141]: E0414 13:31:41.770516 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:41.779319 containerd[1463]: time="2026-04-14T13:31:41.779061870Z" level=info msg="CreateContainer within sandbox \"9c5d276824e4e0814cd3fd7c0c77569297100c6b19e8ce785046268a43cd0306\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 13:31:41.780938 containerd[1463]: time="2026-04-14T13:31:41.780133033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5402ebbb74d66a3d85c14614b6b7053d9c29f206afeff88e153f668b73dd201f\"" Apr 14 13:31:41.781702 kubelet[2141]: E0414 13:31:41.781617 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:41.787157 containerd[1463]: time="2026-04-14T13:31:41.787115316Z" level=info msg="CreateContainer within sandbox \"5402ebbb74d66a3d85c14614b6b7053d9c29f206afeff88e153f668b73dd201f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 13:31:41.794824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153970807.mount: Deactivated successfully. 
Apr 14 13:31:41.797439 containerd[1463]: time="2026-04-14T13:31:41.797402038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6a01249a97f49501a987efc322eaa4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"29321245cf50f16c50358f04a99bbf0cb9f5ba031072719c2c77fc19cc9e12ba\"" Apr 14 13:31:41.798863 kubelet[2141]: E0414 13:31:41.798838 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:41.812430 containerd[1463]: time="2026-04-14T13:31:41.811997694Z" level=info msg="CreateContainer within sandbox \"9c5d276824e4e0814cd3fd7c0c77569297100c6b19e8ce785046268a43cd0306\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8c0866d0dee6723dbbcc488e957b49f74c79b899d821ca164b93703aad46205\"" Apr 14 13:31:41.815575 containerd[1463]: time="2026-04-14T13:31:41.815481235Z" level=info msg="StartContainer for \"d8c0866d0dee6723dbbcc488e957b49f74c79b899d821ca164b93703aad46205\"" Apr 14 13:31:41.819872 containerd[1463]: time="2026-04-14T13:31:41.819739021Z" level=info msg="CreateContainer within sandbox \"29321245cf50f16c50358f04a99bbf0cb9f5ba031072719c2c77fc19cc9e12ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 13:31:41.825116 containerd[1463]: time="2026-04-14T13:31:41.824745594Z" level=info msg="CreateContainer within sandbox \"5402ebbb74d66a3d85c14614b6b7053d9c29f206afeff88e153f668b73dd201f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"35d6694ce5449c7fd8e9d29e63fbaa9c2831d5ec63075b9df725b367404beacb\"" Apr 14 13:31:41.832911 containerd[1463]: time="2026-04-14T13:31:41.832753526Z" level=info msg="StartContainer for \"35d6694ce5449c7fd8e9d29e63fbaa9c2831d5ec63075b9df725b367404beacb\"" Apr 14 13:31:41.863729 kubelet[2141]: E0414 13:31:41.863687 2141 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:31:41.873257 containerd[1463]: time="2026-04-14T13:31:41.873185851Z" level=info msg="CreateContainer within sandbox \"29321245cf50f16c50358f04a99bbf0cb9f5ba031072719c2c77fc19cc9e12ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb147165cd6649a8c353a3ef0dbcdd82eab992b47080c38ed815f5c1efacf8af\"" Apr 14 13:31:41.874248 containerd[1463]: time="2026-04-14T13:31:41.874061278Z" level=info msg="StartContainer for \"eb147165cd6649a8c353a3ef0dbcdd82eab992b47080c38ed815f5c1efacf8af\"" Apr 14 13:31:41.895163 systemd[1]: Started cri-containerd-d8c0866d0dee6723dbbcc488e957b49f74c79b899d821ca164b93703aad46205.scope - libcontainer container d8c0866d0dee6723dbbcc488e957b49f74c79b899d821ca164b93703aad46205. Apr 14 13:31:41.898954 systemd[1]: Started cri-containerd-35d6694ce5449c7fd8e9d29e63fbaa9c2831d5ec63075b9df725b367404beacb.scope - libcontainer container 35d6694ce5449c7fd8e9d29e63fbaa9c2831d5ec63075b9df725b367404beacb. Apr 14 13:31:41.900080 systemd[1]: Started cri-containerd-eb147165cd6649a8c353a3ef0dbcdd82eab992b47080c38ed815f5c1efacf8af.scope - libcontainer container eb147165cd6649a8c353a3ef0dbcdd82eab992b47080c38ed815f5c1efacf8af. 
Apr 14 13:31:41.970962 containerd[1463]: time="2026-04-14T13:31:41.970914952Z" level=info msg="StartContainer for \"d8c0866d0dee6723dbbcc488e957b49f74c79b899d821ca164b93703aad46205\" returns successfully" Apr 14 13:31:41.987336 containerd[1463]: time="2026-04-14T13:31:41.986405162Z" level=info msg="StartContainer for \"35d6694ce5449c7fd8e9d29e63fbaa9c2831d5ec63075b9df725b367404beacb\" returns successfully" Apr 14 13:31:41.987336 containerd[1463]: time="2026-04-14T13:31:41.986417291Z" level=info msg="StartContainer for \"eb147165cd6649a8c353a3ef0dbcdd82eab992b47080c38ed815f5c1efacf8af\" returns successfully" Apr 14 13:31:42.736847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511685229.mount: Deactivated successfully. Apr 14 13:31:42.982218 kubelet[2141]: I0414 13:31:42.982106 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:42.989491 kubelet[2141]: E0414 13:31:42.989382 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:42.993210 kubelet[2141]: E0414 13:31:42.989928 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:42.995973 kubelet[2141]: E0414 13:31:42.995929 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:42.996196 kubelet[2141]: E0414 13:31:42.996075 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:42.998107 kubelet[2141]: E0414 13:31:42.998076 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:42.998261 kubelet[2141]: E0414 13:31:42.998203 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:44.010709 kubelet[2141]: E0414 13:31:44.010634 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:44.011172 kubelet[2141]: E0414 13:31:44.010891 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:44.012913 kubelet[2141]: E0414 13:31:44.011769 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:44.012913 kubelet[2141]: E0414 13:31:44.012076 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:44.012913 kubelet[2141]: E0414 13:31:44.012165 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:44.012913 kubelet[2141]: E0414 13:31:44.012409 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:45.008933 kubelet[2141]: E0414 13:31:45.008890 2141 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:31:45.009174 kubelet[2141]: E0414 13:31:45.009052 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:46.077632 kubelet[2141]: E0414 13:31:46.077583 2141 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 13:31:46.162857 kubelet[2141]: I0414 13:31:46.162761 2141 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 13:31:46.162857 kubelet[2141]: E0414 13:31:46.162839 2141 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 13:31:46.236192 kubelet[2141]: E0414 13:31:46.235703 2141 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a63c632fcee1b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:31:39.745755577 +0000 UTC m=+1.021256121,LastTimestamp:2026-04-14 13:31:39.745755577 +0000 UTC m=+1.021256121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:31:46.255242 kubelet[2141]: I0414 13:31:46.254721 2141 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:46.662632 kubelet[2141]: I0414 13:31:46.648081 2141 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:46.673647 kubelet[2141]: E0414 13:31:46.673548 2141 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:46.673647 kubelet[2141]: I0414 13:31:46.673607 2141 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:46.677097 kubelet[2141]: E0414 13:31:46.676142 2141 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:46.677097 kubelet[2141]: E0414 13:31:46.676357 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:46.679557 kubelet[2141]: E0414 13:31:46.678623 2141 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:46.679557 kubelet[2141]: I0414 13:31:46.678644 2141 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 
14 13:31:46.689709 kubelet[2141]: E0414 13:31:46.689609 2141 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:46.827226 kubelet[2141]: I0414 13:31:46.827151 2141 apiserver.go:52] "Watching apiserver" Apr 14 13:31:46.851094 kubelet[2141]: I0414 13:31:46.850627 2141 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 13:31:48.826549 kubelet[2141]: I0414 13:31:48.826321 2141 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:48.840790 kubelet[2141]: E0414 13:31:48.840447 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:49.030701 kubelet[2141]: E0414 13:31:49.030629 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:49.833872 kubelet[2141]: I0414 13:31:49.833163 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8331221640000002 podStartE2EDuration="1.833122164s" podCreationTimestamp="2026-04-14 13:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:31:49.833022248 +0000 UTC m=+11.108522797" watchObservedRunningTime="2026-04-14 13:31:49.833122164 +0000 UTC m=+11.108622714" Apr 14 13:31:50.185147 kubelet[2141]: I0414 13:31:50.181521 2141 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:50.193782 kubelet[2141]: E0414 13:31:50.193734 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:50.690029 systemd[1]: Reloading requested from client PID 2436 ('systemctl') (unit session-7.scope)... Apr 14 13:31:50.690057 systemd[1]: Reloading... Apr 14 13:31:50.861109 zram_generator::config[2475]: No configuration found. Apr 14 13:31:51.004333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:31:51.071136 kubelet[2141]: E0414 13:31:51.069616 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:51.135248 systemd[1]: Reloading finished in 444 ms. Apr 14 13:31:51.180403 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:51.180767 kubelet[2141]: I0414 13:31:51.180517 2141 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:31:51.200370 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 13:31:51.200578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:51.200625 systemd[1]: kubelet.service: Consumed 3.070s CPU time, 135.4M memory peak, 0B memory swap peak. 
Apr 14 13:31:51.205434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:31:51.369899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:31:51.376215 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 13:31:51.656798 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:31:51.657309 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 13:31:51.657392 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:31:51.657543 kubelet[2519]: I0414 13:31:51.657498 2519 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 13:31:51.670734 kubelet[2519]: I0414 13:31:51.670672 2519 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 13:31:51.670734 kubelet[2519]: I0414 13:31:51.670729 2519 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 13:31:51.670988 kubelet[2519]: I0414 13:31:51.670942 2519 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 13:31:51.674857 kubelet[2519]: I0414 13:31:51.674761 2519 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 13:31:51.680053 kubelet[2519]: I0414 13:31:51.679681 2519 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:31:51.690415 kubelet[2519]: E0414 13:31:51.690367 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 13:31:51.690673 kubelet[2519]: I0414 13:31:51.690406 2519 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 13:31:51.697542 kubelet[2519]: I0414 13:31:51.696925 2519 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 14 13:31:51.697542 kubelet[2519]: I0414 13:31:51.697134 2519 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 13:31:51.697542 kubelet[2519]: I0414 13:31:51.697216 2519 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 13:31:51.697542 kubelet[2519]: I0414 13:31:51.697392 2519 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 13:31:51.697928 kubelet[2519]: I0414 13:31:51.697399 2519 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 13:31:51.697928 kubelet[2519]: I0414 13:31:51.697441 2519 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:31:51.697928 kubelet[2519]: I0414 13:31:51.697616 2519 kubelet.go:480] "Attempting to sync node with API server" Apr 14 13:31:51.697928 kubelet[2519]: I0414 13:31:51.697625 2519 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 13:31:51.697928 kubelet[2519]: I0414 13:31:51.697645 2519 kubelet.go:386] "Adding apiserver pod source" Apr 14 13:31:51.697928 kubelet[2519]: I0414 13:31:51.697655 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 13:31:51.699525 kubelet[2519]: I0414 13:31:51.699481 2519 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 13:31:51.702082 kubelet[2519]: I0414 13:31:51.700047 2519 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 13:31:51.715361 kubelet[2519]: I0414 13:31:51.713062 2519 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 13:31:51.715361 kubelet[2519]: I0414 13:31:51.713578 2519 server.go:1289] "Started kubelet" Apr 14 13:31:51.715361 kubelet[2519]: I0414 13:31:51.715293 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 13:31:51.724967 kubelet[2519]: I0414 13:31:51.724905 
2519 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 13:31:51.725109 kubelet[2519]: E0414 13:31:51.725094 2519 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:31:51.727950 kubelet[2519]: I0414 13:31:51.725273 2519 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 13:31:51.727950 kubelet[2519]: I0414 13:31:51.725407 2519 reconciler.go:26] "Reconciler: start to sync state" Apr 14 13:31:51.727950 kubelet[2519]: I0414 13:31:51.725572 2519 factory.go:223] Registration of the systemd container factory successfully Apr 14 13:31:51.727950 kubelet[2519]: I0414 13:31:51.725651 2519 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 13:31:51.727950 kubelet[2519]: I0414 13:31:51.725736 2519 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 13:31:51.731853 kubelet[2519]: I0414 13:31:51.731755 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 13:31:51.733353 kubelet[2519]: E0414 13:31:51.733296 2519 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 13:31:51.734997 kubelet[2519]: I0414 13:31:51.734967 2519 server.go:317] "Adding debug handlers to kubelet server" Apr 14 13:31:51.739647 kubelet[2519]: I0414 13:31:51.739561 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 13:31:51.740943 kubelet[2519]: I0414 13:31:51.740069 2519 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 13:31:51.748131 kubelet[2519]: I0414 13:31:51.748103 2519 factory.go:223] Registration of the containerd container factory successfully Apr 14 13:31:51.753135 kubelet[2519]: I0414 13:31:51.753101 2519 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 13:31:51.757712 kubelet[2519]: I0414 13:31:51.757642 2519 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 13:31:51.757712 kubelet[2519]: I0414 13:31:51.757682 2519 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 13:31:51.758195 kubelet[2519]: I0414 13:31:51.757728 2519 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 14 13:31:51.758195 kubelet[2519]: I0414 13:31:51.757737 2519 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 13:31:51.760407 kubelet[2519]: E0414 13:31:51.759253 2519 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:31:51.848775 kubelet[2519]: I0414 13:31:51.848678 2519 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 13:31:51.848775 kubelet[2519]: I0414 13:31:51.848722 2519 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 13:31:51.848775 kubelet[2519]: I0414 13:31:51.848742 2519 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:31:51.849004 kubelet[2519]: I0414 13:31:51.848935 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 13:31:51.849004 kubelet[2519]: I0414 13:31:51.848943 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 13:31:51.849004 kubelet[2519]: I0414 13:31:51.848958 2519 policy_none.go:49] "None policy: Start" Apr 14 13:31:51.849004 kubelet[2519]: I0414 13:31:51.848967 2519 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 13:31:51.849004 kubelet[2519]: I0414 13:31:51.848974 2519 state_mem.go:35] "Initializing new in-memory state store" Apr 14 13:31:51.849124 kubelet[2519]: I0414 13:31:51.849057 2519 state_mem.go:75] "Updated machine memory state" Apr 14 13:31:51.859739 kubelet[2519]: E0414 13:31:51.859678 2519 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:31:51.860320 kubelet[2519]: E0414 13:31:51.860172 2519 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 13:31:51.860944 kubelet[2519]: I0414 13:31:51.860459 2519 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:31:51.860944 kubelet[2519]: I0414 13:31:51.860472 2519 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:31:51.860944 kubelet[2519]: I0414 13:31:51.860848 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:31:51.873917 kubelet[2519]: E0414 13:31:51.873246 2519 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 13:31:51.992864 kubelet[2519]: I0414 13:31:51.991766 2519 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:31:52.012166 kubelet[2519]: I0414 13:31:52.011748 2519 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 13:31:52.018113 kubelet[2519]: I0414 13:31:52.017177 2519 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 13:31:52.093680 kubelet[2519]: I0414 13:31:52.093281 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.124899 kubelet[2519]: I0414 13:31:52.124675 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:52.153748 kubelet[2519]: I0414 13:31:52.152668 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.172995 kubelet[2519]: E0414 13:31:52.172923 2519 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.175381 kubelet[2519]: E0414 13:31:52.175327 2519 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.254005 kubelet[2519]: I0414 13:31:52.253614 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6a01249a97f49501a987efc322eaa4c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a01249a97f49501a987efc322eaa4c\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.254005 kubelet[2519]: I0414 13:31:52.253686 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6a01249a97f49501a987efc322eaa4c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a01249a97f49501a987efc322eaa4c\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.254005 kubelet[2519]: I0414 13:31:52.253739 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6a01249a97f49501a987efc322eaa4c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6a01249a97f49501a987efc322eaa4c\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.254005 kubelet[2519]: I0414 13:31:52.253766 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.254005 kubelet[2519]: I0414 13:31:52.253836 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.254351 kubelet[2519]: I0414 13:31:52.253849 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.254351 kubelet[2519]: I0414 13:31:52.253860 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:31:52.254351 kubelet[2519]: I0414 13:31:52.253899 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.254351 kubelet[2519]: I0414 13:31:52.253916 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:31:52.475298 kubelet[2519]: E0414 13:31:52.474520 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:52.475298 kubelet[2519]: E0414 13:31:52.475191 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:52.475991 kubelet[2519]: E0414 13:31:52.475887 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:52.724474 kubelet[2519]: I0414 13:31:52.709186 2519 apiserver.go:52] "Watching apiserver" Apr 14 13:31:52.799972 kubelet[2519]: E0414 13:31:52.799922 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:52.803123 kubelet[2519]: I0414 13:31:52.800673 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.804523 kubelet[2519]: E0414 13:31:52.804484 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:52.829631 kubelet[2519]: I0414 13:31:52.829551 2519 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 13:31:52.831919 kubelet[2519]: E0414 13:31:52.831028 2519 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 13:31:52.832252 kubelet[2519]: E0414 13:31:52.832218 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:52.909029 kubelet[2519]: 
I0414 13:31:52.908428 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.90840209 podStartE2EDuration="908.40209ms" podCreationTimestamp="2026-04-14 13:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:31:52.856626468 +0000 UTC m=+1.474241045" watchObservedRunningTime="2026-04-14 13:31:52.90840209 +0000 UTC m=+1.526016746" Apr 14 13:31:52.980552 kubelet[2519]: I0414 13:31:52.980273 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.980247107 podStartE2EDuration="2.980247107s" podCreationTimestamp="2026-04-14 13:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:31:52.938209828 +0000 UTC m=+1.555824404" watchObservedRunningTime="2026-04-14 13:31:52.980247107 +0000 UTC m=+1.597861688" Apr 14 13:31:53.824930 kubelet[2519]: E0414 13:31:53.824596 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:53.828304 kubelet[2519]: E0414 13:31:53.825022 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:54.831038 kubelet[2519]: E0414 13:31:54.830894 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:54.990706 kubelet[2519]: I0414 13:31:54.990425 2519 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 13:31:54.992046 containerd[1463]: time="2026-04-14T13:31:54.991979318Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 13:31:54.992433 kubelet[2519]: I0414 13:31:54.992311 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 13:31:55.847589 kubelet[2519]: E0414 13:31:55.847525 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:55.864038 kubelet[2519]: E0414 13:31:55.862192 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:56.087489 systemd[1]: Created slice kubepods-besteffort-pod5052cd8d_8be1_4dcc_8ca5_1b2267b71d2d.slice - libcontainer container kubepods-besteffort-pod5052cd8d_8be1_4dcc_8ca5_1b2267b71d2d.slice. 
Apr 14 13:31:56.116544 kubelet[2519]: I0414 13:31:56.116034 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d-kube-proxy\") pod \"kube-proxy-h6fjm\" (UID: \"5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d\") " pod="kube-system/kube-proxy-h6fjm" Apr 14 13:31:56.116544 kubelet[2519]: I0414 13:31:56.116104 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d-lib-modules\") pod \"kube-proxy-h6fjm\" (UID: \"5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d\") " pod="kube-system/kube-proxy-h6fjm" Apr 14 13:31:56.116544 kubelet[2519]: I0414 13:31:56.116205 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d-xtables-lock\") pod \"kube-proxy-h6fjm\" (UID: \"5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d\") " pod="kube-system/kube-proxy-h6fjm" Apr 14 13:31:56.116544 kubelet[2519]: I0414 13:31:56.116273 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8ct\" (UniqueName: \"kubernetes.io/projected/5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d-kube-api-access-bm8ct\") pod \"kube-proxy-h6fjm\" (UID: \"5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d\") " pod="kube-system/kube-proxy-h6fjm" Apr 14 13:31:56.256504 systemd[1]: Created slice kubepods-besteffort-pod1053630a_08a2_4c9c_a980_c18c108ef0da.slice - libcontainer container kubepods-besteffort-pod1053630a_08a2_4c9c_a980_c18c108ef0da.slice. Apr 14 13:31:56.358317 kubelet[2519]: I0414 13:31:56.357902 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1053630a-08a2-4c9c-a980-c18c108ef0da-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-zrqfb\" (UID: \"1053630a-08a2-4c9c-a980-c18c108ef0da\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zrqfb" Apr 14 13:31:56.365088 kubelet[2519]: I0414 13:31:56.360944 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chz47\" (UniqueName: \"kubernetes.io/projected/1053630a-08a2-4c9c-a980-c18c108ef0da-kube-api-access-chz47\") pod \"tigera-operator-6bf85f8dd-zrqfb\" (UID: \"1053630a-08a2-4c9c-a980-c18c108ef0da\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zrqfb" Apr 14 13:31:56.413226 kubelet[2519]: E0414 13:31:56.410792 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:56.415901 containerd[1463]: time="2026-04-14T13:31:56.415219294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6fjm,Uid:5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d,Namespace:kube-system,Attempt:0,}" Apr 14 13:31:56.471729 containerd[1463]: time="2026-04-14T13:31:56.468011568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:56.471729 containerd[1463]: time="2026-04-14T13:31:56.468302362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:56.471729 containerd[1463]: time="2026-04-14T13:31:56.468321911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:56.471729 containerd[1463]: time="2026-04-14T13:31:56.468564238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:56.823284 systemd[1]: Started cri-containerd-e2303e9de5245ae352e2fc6b1ca206eb3b8491fe4b534f6f25436676f38b9ed9.scope - libcontainer container e2303e9de5245ae352e2fc6b1ca206eb3b8491fe4b534f6f25436676f38b9ed9. Apr 14 13:31:56.846861 kubelet[2519]: E0414 13:31:56.845494 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:56.876963 containerd[1463]: time="2026-04-14T13:31:56.876898620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zrqfb,Uid:1053630a-08a2-4c9c-a980-c18c108ef0da,Namespace:tigera-operator,Attempt:0,}" Apr 14 13:31:56.965908 containerd[1463]: time="2026-04-14T13:31:56.965427496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6fjm,Uid:5052cd8d-8be1-4dcc-8ca5-1b2267b71d2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2303e9de5245ae352e2fc6b1ca206eb3b8491fe4b534f6f25436676f38b9ed9\"" Apr 14 13:31:56.973411 kubelet[2519]: E0414 13:31:56.972973 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:56.988121 containerd[1463]: time="2026-04-14T13:31:56.987978502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:31:56.988121 containerd[1463]: time="2026-04-14T13:31:56.988073467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:31:56.988121 containerd[1463]: time="2026-04-14T13:31:56.988086798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:56.988464 containerd[1463]: time="2026-04-14T13:31:56.988180343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:31:56.989490 containerd[1463]: time="2026-04-14T13:31:56.989463435Z" level=info msg="CreateContainer within sandbox \"e2303e9de5245ae352e2fc6b1ca206eb3b8491fe4b534f6f25436676f38b9ed9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 13:31:57.022451 containerd[1463]: time="2026-04-14T13:31:57.022279032Z" level=info msg="CreateContainer within sandbox \"e2303e9de5245ae352e2fc6b1ca206eb3b8491fe4b534f6f25436676f38b9ed9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88e085365fcdc70f4fdd9f805c5b00858d9f0290061498d8c1a2da927b312654\"" Apr 14 13:31:57.024086 containerd[1463]: time="2026-04-14T13:31:57.023418728Z" level=info msg="StartContainer for \"88e085365fcdc70f4fdd9f805c5b00858d9f0290061498d8c1a2da927b312654\"" Apr 14 13:31:57.026018 systemd[1]: Started cri-containerd-86b10e686eb067c0ba23c80d3ebdb83dcf13a45c4f4f4118fa737bce8c3a0ad0.scope - libcontainer container 86b10e686eb067c0ba23c80d3ebdb83dcf13a45c4f4f4118fa737bce8c3a0ad0. Apr 14 13:31:57.092048 systemd[1]: Started cri-containerd-88e085365fcdc70f4fdd9f805c5b00858d9f0290061498d8c1a2da927b312654.scope - libcontainer container 88e085365fcdc70f4fdd9f805c5b00858d9f0290061498d8c1a2da927b312654. Apr 14 13:31:57.093122 containerd[1463]: time="2026-04-14T13:31:57.092893691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zrqfb,Uid:1053630a-08a2-4c9c-a980-c18c108ef0da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"86b10e686eb067c0ba23c80d3ebdb83dcf13a45c4f4f4118fa737bce8c3a0ad0\"" Apr 14 13:31:57.096857 containerd[1463]: time="2026-04-14T13:31:57.094988485Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 13:31:57.139076 containerd[1463]: time="2026-04-14T13:31:57.139032292Z" level=info msg="StartContainer for \"88e085365fcdc70f4fdd9f805c5b00858d9f0290061498d8c1a2da927b312654\" returns successfully" Apr 14 13:31:57.856319 kubelet[2519]: E0414 13:31:57.855835 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:31:57.887539 kubelet[2519]: I0414 13:31:57.887128 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6fjm" podStartSLOduration=2.887096307 podStartE2EDuration="2.887096307s" podCreationTimestamp="2026-04-14 13:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:31:57.886952774 +0000 UTC m=+6.504567341" watchObservedRunningTime="2026-04-14 13:31:57.887096307 +0000 UTC m=+6.504710870" Apr 14 13:31:58.534471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176526687.mount: Deactivated successfully. 
Apr 14 13:32:01.794915 containerd[1463]: time="2026-04-14T13:32:01.794492922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:01.796140 containerd[1463]: time="2026-04-14T13:32:01.795992172Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 14 13:32:01.797265 containerd[1463]: time="2026-04-14T13:32:01.797083315Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:01.801547 containerd[1463]: time="2026-04-14T13:32:01.801491287Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:01.802171 containerd[1463]: time="2026-04-14T13:32:01.802136012Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.705136587s" Apr 14 13:32:01.802171 containerd[1463]: time="2026-04-14T13:32:01.802167919Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 14 13:32:01.812274 containerd[1463]: time="2026-04-14T13:32:01.812221779Z" level=info msg="CreateContainer within sandbox \"86b10e686eb067c0ba23c80d3ebdb83dcf13a45c4f4f4118fa737bce8c3a0ad0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 14 13:32:01.841700 containerd[1463]: time="2026-04-14T13:32:01.841589614Z" level=info msg="CreateContainer within sandbox \"86b10e686eb067c0ba23c80d3ebdb83dcf13a45c4f4f4118fa737bce8c3a0ad0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b51a94e796b9e47fea3d6698bd685465d82c5b8c5351fc097277aa1db813ec96\"" Apr 14 13:32:01.845277 containerd[1463]: time="2026-04-14T13:32:01.845015261Z" level=info msg="StartContainer for \"b51a94e796b9e47fea3d6698bd685465d82c5b8c5351fc097277aa1db813ec96\"" Apr 14 13:32:01.887137 systemd[1]: Started cri-containerd-b51a94e796b9e47fea3d6698bd685465d82c5b8c5351fc097277aa1db813ec96.scope - libcontainer container b51a94e796b9e47fea3d6698bd685465d82c5b8c5351fc097277aa1db813ec96. Apr 14 13:32:01.987496 containerd[1463]: time="2026-04-14T13:32:01.987407281Z" level=info msg="StartContainer for \"b51a94e796b9e47fea3d6698bd685465d82c5b8c5351fc097277aa1db813ec96\" returns successfully" Apr 14 13:32:02.092304 kubelet[2519]: E0414 13:32:02.090966 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:02.894392 kubelet[2519]: E0414 13:32:02.894307 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:05.275192 update_engine[1451]: I20260414 13:32:05.265058 1451 update_attempter.cc:509] Updating boot flags... 
Apr 14 13:32:05.426849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2890) Apr 14 13:32:05.600880 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2891) Apr 14 13:32:08.259890 sudo[1645]: pam_unix(sudo:session): session closed for user root Apr 14 13:32:08.264535 sshd[1642]: pam_unix(sshd:session): session closed for user core Apr 14 13:32:08.267324 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:38040.service: Deactivated successfully. Apr 14 13:32:08.270346 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 13:32:08.270534 systemd[1]: session-7.scope: Consumed 6.882s CPU time, 162.4M memory peak, 0B memory swap peak. Apr 14 13:32:08.272538 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Apr 14 13:32:08.283426 systemd-logind[1450]: Removed session 7. Apr 14 13:32:15.054844 kubelet[2519]: I0414 13:32:15.054723 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-zrqfb" podStartSLOduration=14.343925706 podStartE2EDuration="19.05470006s" podCreationTimestamp="2026-04-14 13:31:56 +0000 UTC" firstStartedPulling="2026-04-14 13:31:57.094075138 +0000 UTC m=+5.711689716" lastFinishedPulling="2026-04-14 13:32:01.804849506 +0000 UTC m=+10.422464070" observedRunningTime="2026-04-14 13:32:03.009261789 +0000 UTC m=+11.626876373" watchObservedRunningTime="2026-04-14 13:32:15.05470006 +0000 UTC m=+23.672314643" Apr 14 13:32:15.082116 systemd[1]: Created slice kubepods-besteffort-pod65656f52_cffe_4f7e_9e9c_d62e79d1aae9.slice - libcontainer container kubepods-besteffort-pod65656f52_cffe_4f7e_9e9c_d62e79d1aae9.slice. Apr 14 13:32:15.097192 kubelet[2519]: I0414 13:32:15.097101 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65656f52-cffe-4f7e-9e9c-d62e79d1aae9-tigera-ca-bundle\") pod \"calico-typha-99446444b-f44zh\" (UID: \"65656f52-cffe-4f7e-9e9c-d62e79d1aae9\") " pod="calico-system/calico-typha-99446444b-f44zh" Apr 14 13:32:15.097192 kubelet[2519]: I0414 13:32:15.097165 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65656f52-cffe-4f7e-9e9c-d62e79d1aae9-typha-certs\") pod \"calico-typha-99446444b-f44zh\" (UID: \"65656f52-cffe-4f7e-9e9c-d62e79d1aae9\") " pod="calico-system/calico-typha-99446444b-f44zh" Apr 14 13:32:15.097192 kubelet[2519]: I0414 13:32:15.097195 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jt9q\" (UniqueName: \"kubernetes.io/projected/65656f52-cffe-4f7e-9e9c-d62e79d1aae9-kube-api-access-6jt9q\") pod \"calico-typha-99446444b-f44zh\" (UID: \"65656f52-cffe-4f7e-9e9c-d62e79d1aae9\") " pod="calico-system/calico-typha-99446444b-f44zh" Apr 14 13:32:15.278374 systemd[1]: Created slice kubepods-besteffort-pod6061aacd_eceb_4f0f_84f2_5378af446e6b.slice - libcontainer container kubepods-besteffort-pod6061aacd_eceb_4f0f_84f2_5378af446e6b.slice. 
Apr 14 13:32:15.306415 kubelet[2519]: I0414 13:32:15.306222 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhcd\" (UniqueName: \"kubernetes.io/projected/6061aacd-eceb-4f0f-84f2-5378af446e6b-kube-api-access-hxhcd\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.306415 kubelet[2519]: I0414 13:32:15.306290 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-cni-net-dir\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.306415 kubelet[2519]: I0414 13:32:15.306303 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6061aacd-eceb-4f0f-84f2-5378af446e6b-tigera-ca-bundle\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.306415 kubelet[2519]: I0414 13:32:15.306341 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-cni-bin-dir\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.306415 kubelet[2519]: I0414 13:32:15.306353 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-cni-log-dir\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.306741 kubelet[2519]: I0414 13:32:15.306365 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-var-run-calico\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.306741 kubelet[2519]: I0414 13:32:15.306380 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-nodeproc\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307154 kubelet[2519]: I0414 13:32:15.306394 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-xtables-lock\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307154 kubelet[2519]: I0414 13:32:15.307091 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-policysync\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307154 kubelet[2519]: I0414 13:32:15.307113 2519 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-sys-fs\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307154 kubelet[2519]: I0414 13:32:15.307128 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-bpffs\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307154 kubelet[2519]: I0414 13:32:15.307142 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6061aacd-eceb-4f0f-84f2-5378af446e6b-node-certs\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307154 kubelet[2519]: I0414 13:32:15.307159 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-var-lib-calico\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307459 kubelet[2519]: I0414 13:32:15.307177 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-flexvol-driver-host\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.307459 kubelet[2519]: I0414 13:32:15.307189 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6061aacd-eceb-4f0f-84f2-5378af446e6b-lib-modules\") pod \"calico-node-p27zw\" (UID: \"6061aacd-eceb-4f0f-84f2-5378af446e6b\") " pod="calico-system/calico-node-p27zw" Apr 14 13:32:15.394105 kubelet[2519]: E0414 13:32:15.392521 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:15.482230 containerd[1463]: time="2026-04-14T13:32:15.474150315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-99446444b-f44zh,Uid:65656f52-cffe-4f7e-9e9c-d62e79d1aae9,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:15.513182 kubelet[2519]: E0414 13:32:15.513143 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:15.513357 kubelet[2519]: W0414 13:32:15.513342 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:15.513422 kubelet[2519]: E0414 13:32:15.513411 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:15.517029 kubelet[2519]: E0414 13:32:15.516889 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:15.517029 kubelet[2519]: W0414 13:32:15.516972 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:15.517261 kubelet[2519]: E0414 13:32:15.517142 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:15.674377 containerd[1463]: time="2026-04-14T13:32:15.672222855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:15.674377 containerd[1463]: time="2026-04-14T13:32:15.672302131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:15.674377 containerd[1463]: time="2026-04-14T13:32:15.672316576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:15.674377 containerd[1463]: time="2026-04-14T13:32:15.672411828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:15.692064 kubelet[2519]: E0414 13:32:15.690976 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:15.692064 kubelet[2519]: W0414 13:32:15.691007 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:15.692064 kubelet[2519]: E0414 13:32:15.691111 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:15.783557 systemd[1]: Started cri-containerd-3cd7b06c6f905323dedd7b979cd46cd26e51712fdf000caf90a26520f39c1d00.scope - libcontainer container 3cd7b06c6f905323dedd7b979cd46cd26e51712fdf000caf90a26520f39c1d00. Apr 14 13:32:15.895108 containerd[1463]: time="2026-04-14T13:32:15.894869068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p27zw,Uid:6061aacd-eceb-4f0f-84f2-5378af446e6b,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:15.959986 kubelet[2519]: E0414 13:32:15.959758 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:16.041361 containerd[1463]: time="2026-04-14T13:32:16.033531048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:16.041361 containerd[1463]: time="2026-04-14T13:32:16.033601088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:16.041361 containerd[1463]: time="2026-04-14T13:32:16.033614609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:16.041361 containerd[1463]: time="2026-04-14T13:32:16.034771298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:16.055743 kubelet[2519]: E0414 13:32:16.055609 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.056606 kubelet[2519]: W0414 13:32:16.056455 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.056606 kubelet[2519]: E0414 13:32:16.056580 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.057095 kubelet[2519]: E0414 13:32:16.057052 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.057095 kubelet[2519]: W0414 13:32:16.057062 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.057255 kubelet[2519]: E0414 13:32:16.057178 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.057446 kubelet[2519]: E0414 13:32:16.057438 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.057517 kubelet[2519]: W0414 13:32:16.057481 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.057591 kubelet[2519]: E0414 13:32:16.057567 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.061200 kubelet[2519]: E0414 13:32:16.061063 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.061200 kubelet[2519]: W0414 13:32:16.061112 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.061200 kubelet[2519]: E0414 13:32:16.061126 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.061633 kubelet[2519]: E0414 13:32:16.061551 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.061633 kubelet[2519]: W0414 13:32:16.061559 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.061633 kubelet[2519]: E0414 13:32:16.061573 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.062165 kubelet[2519]: E0414 13:32:16.062010 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.062165 kubelet[2519]: W0414 13:32:16.062022 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.062165 kubelet[2519]: E0414 13:32:16.062061 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.062607 kubelet[2519]: E0414 13:32:16.062503 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.062607 kubelet[2519]: W0414 13:32:16.062512 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.062607 kubelet[2519]: E0414 13:32:16.062520 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.062747 kubelet[2519]: E0414 13:32:16.062727 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.062747 kubelet[2519]: W0414 13:32:16.062737 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.062779 kubelet[2519]: E0414 13:32:16.062748 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.063203 kubelet[2519]: E0414 13:32:16.062994 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.063203 kubelet[2519]: W0414 13:32:16.063003 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.063203 kubelet[2519]: E0414 13:32:16.063018 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.063489 kubelet[2519]: E0414 13:32:16.063383 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.063489 kubelet[2519]: W0414 13:32:16.063398 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.063489 kubelet[2519]: E0414 13:32:16.063429 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.063858 kubelet[2519]: E0414 13:32:16.063758 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.063858 kubelet[2519]: W0414 13:32:16.063768 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.063858 kubelet[2519]: E0414 13:32:16.063776 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.064604 kubelet[2519]: E0414 13:32:16.064092 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.064604 kubelet[2519]: W0414 13:32:16.064099 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.064604 kubelet[2519]: E0414 13:32:16.064110 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.068336 kubelet[2519]: E0414 13:32:16.068304 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.068575 kubelet[2519]: W0414 13:32:16.068435 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.068575 kubelet[2519]: E0414 13:32:16.068481 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.070788 kubelet[2519]: E0414 13:32:16.070480 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.070788 kubelet[2519]: W0414 13:32:16.070538 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.070788 kubelet[2519]: E0414 13:32:16.070596 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.072213 kubelet[2519]: E0414 13:32:16.071275 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.078897 kubelet[2519]: W0414 13:32:16.077940 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.078897 kubelet[2519]: E0414 13:32:16.078225 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.081457 containerd[1463]: time="2026-04-14T13:32:16.081329985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-99446444b-f44zh,Uid:65656f52-cffe-4f7e-9e9c-d62e79d1aae9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cd7b06c6f905323dedd7b979cd46cd26e51712fdf000caf90a26520f39c1d00\"" Apr 14 13:32:16.087830 kubelet[2519]: E0414 13:32:16.087523 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.088061 kubelet[2519]: W0414 13:32:16.088046 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.088609 kubelet[2519]: E0414 13:32:16.088479 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.090713 kubelet[2519]: E0414 13:32:16.090572 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.095569 kubelet[2519]: W0414 13:32:16.090588 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.096064 kubelet[2519]: E0414 13:32:16.094466 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:16.096710 kubelet[2519]: E0414 13:32:16.096696 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.098285 containerd[1463]: time="2026-04-14T13:32:16.098261307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 14 13:32:16.099023 kubelet[2519]: E0414 13:32:16.098982 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.099023 kubelet[2519]: W0414 13:32:16.099012 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.099182 kubelet[2519]: E0414 13:32:16.099032 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.100342 kubelet[2519]: E0414 13:32:16.099612 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.101615 kubelet[2519]: W0414 13:32:16.101415 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.102581 kubelet[2519]: E0414 13:32:16.101693 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.114585 kubelet[2519]: E0414 13:32:16.114493 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.114737 kubelet[2519]: W0414 13:32:16.114603 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.114737 kubelet[2519]: E0414 13:32:16.114719 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.128922 kubelet[2519]: E0414 13:32:16.128718 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.128922 kubelet[2519]: W0414 13:32:16.128875 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.129294 kubelet[2519]: E0414 13:32:16.129029 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.131248 kubelet[2519]: I0414 13:32:16.129253 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2556d03-7ce4-4031-9834-67fb67a536f0-socket-dir\") pod \"csi-node-driver-cps8s\" (UID: \"c2556d03-7ce4-4031-9834-67fb67a536f0\") " pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:16.172466 kubelet[2519]: E0414 13:32:16.167143 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.172466 kubelet[2519]: W0414 13:32:16.167220 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.172466 kubelet[2519]: E0414 13:32:16.167297 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.186436 kubelet[2519]: E0414 13:32:16.186195 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.186436 kubelet[2519]: W0414 13:32:16.186366 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.186884 kubelet[2519]: E0414 13:32:16.186488 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.187239 kubelet[2519]: E0414 13:32:16.187220 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.187308 kubelet[2519]: W0414 13:32:16.187240 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.187308 kubelet[2519]: E0414 13:32:16.187259 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.189916 kubelet[2519]: I0414 13:32:16.189409 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2556d03-7ce4-4031-9834-67fb67a536f0-kubelet-dir\") pod \"csi-node-driver-cps8s\" (UID: \"c2556d03-7ce4-4031-9834-67fb67a536f0\") " pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:16.193902 kubelet[2519]: E0414 13:32:16.191653 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.195594 kubelet[2519]: W0414 13:32:16.195461 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.195874 kubelet[2519]: E0414 13:32:16.195611 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.196248 kubelet[2519]: E0414 13:32:16.196055 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.196248 kubelet[2519]: W0414 13:32:16.196067 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.196248 kubelet[2519]: E0414 13:32:16.196077 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.239800 kubelet[2519]: E0414 13:32:16.239482 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.239800 kubelet[2519]: W0414 13:32:16.239676 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.239800 kubelet[2519]: E0414 13:32:16.239758 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.242981 kubelet[2519]: I0414 13:32:16.242785 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln5vs\" (UniqueName: \"kubernetes.io/projected/c2556d03-7ce4-4031-9834-67fb67a536f0-kube-api-access-ln5vs\") pod \"csi-node-driver-cps8s\" (UID: \"c2556d03-7ce4-4031-9834-67fb67a536f0\") " pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:16.262502 systemd[1]: Started cri-containerd-b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004.scope - libcontainer container b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004. Apr 14 13:32:16.274912 kubelet[2519]: E0414 13:32:16.274506 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.275266 kubelet[2519]: W0414 13:32:16.275018 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.275266 kubelet[2519]: E0414 13:32:16.275108 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.276340 kubelet[2519]: E0414 13:32:16.276219 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.276340 kubelet[2519]: W0414 13:32:16.276316 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.276541 kubelet[2519]: E0414 13:32:16.276372 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.276728 kubelet[2519]: E0414 13:32:16.276699 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.276728 kubelet[2519]: W0414 13:32:16.276724 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.276947 kubelet[2519]: E0414 13:32:16.276736 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.278672 kubelet[2519]: I0414 13:32:16.278610 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2556d03-7ce4-4031-9834-67fb67a536f0-registration-dir\") pod \"csi-node-driver-cps8s\" (UID: \"c2556d03-7ce4-4031-9834-67fb67a536f0\") " pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:16.278997 kubelet[2519]: E0414 13:32:16.278941 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.278997 kubelet[2519]: W0414 13:32:16.278962 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.278997 kubelet[2519]: E0414 13:32:16.278972 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.280539 kubelet[2519]: E0414 13:32:16.280431 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.280539 kubelet[2519]: W0414 13:32:16.280491 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.280759 kubelet[2519]: E0414 13:32:16.280569 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.282735 kubelet[2519]: E0414 13:32:16.282539 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.282735 kubelet[2519]: W0414 13:32:16.282711 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.283092 kubelet[2519]: E0414 13:32:16.282762 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.283715 kubelet[2519]: I0414 13:32:16.283677 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c2556d03-7ce4-4031-9834-67fb67a536f0-varrun\") pod \"csi-node-driver-cps8s\" (UID: \"c2556d03-7ce4-4031-9834-67fb67a536f0\") " pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:16.283906 kubelet[2519]: E0414 13:32:16.283877 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.283906 kubelet[2519]: W0414 13:32:16.283898 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.283906 kubelet[2519]: E0414 13:32:16.283907 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.287205 kubelet[2519]: E0414 13:32:16.287020 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.287437 kubelet[2519]: W0414 13:32:16.287223 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.287437 kubelet[2519]: E0414 13:32:16.287262 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.376748 containerd[1463]: time="2026-04-14T13:32:16.376212994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p27zw,Uid:6061aacd-eceb-4f0f-84f2-5378af446e6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\"" Apr 14 13:32:16.386022 kubelet[2519]: E0414 13:32:16.385860 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.386022 kubelet[2519]: W0414 13:32:16.385881 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.386022 kubelet[2519]: E0414 13:32:16.385899 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.387718 kubelet[2519]: E0414 13:32:16.386930 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.387718 kubelet[2519]: W0414 13:32:16.387018 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.387718 kubelet[2519]: E0414 13:32:16.387067 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.388578 kubelet[2519]: E0414 13:32:16.388449 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.388578 kubelet[2519]: W0414 13:32:16.388561 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.388709 kubelet[2519]: E0414 13:32:16.388616 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.389472 kubelet[2519]: E0414 13:32:16.389148 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.389472 kubelet[2519]: W0414 13:32:16.389162 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.389472 kubelet[2519]: E0414 13:32:16.389174 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.390175 kubelet[2519]: E0414 13:32:16.389571 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.390175 kubelet[2519]: W0414 13:32:16.389578 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.390175 kubelet[2519]: E0414 13:32:16.389588 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.391460 kubelet[2519]: E0414 13:32:16.391253 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.391460 kubelet[2519]: W0414 13:32:16.391265 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.391460 kubelet[2519]: E0414 13:32:16.391282 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.391732 kubelet[2519]: E0414 13:32:16.391613 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.391732 kubelet[2519]: W0414 13:32:16.391636 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.391732 kubelet[2519]: E0414 13:32:16.391645 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.391880 kubelet[2519]: E0414 13:32:16.391874 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.391931 kubelet[2519]: W0414 13:32:16.391927 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.391987 kubelet[2519]: E0414 13:32:16.391964 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.398568 kubelet[2519]: E0414 13:32:16.398097 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.398568 kubelet[2519]: W0414 13:32:16.398287 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.398568 kubelet[2519]: E0414 13:32:16.398348 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.398969 kubelet[2519]: E0414 13:32:16.398958 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.399028 kubelet[2519]: W0414 13:32:16.399019 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.399074 kubelet[2519]: E0414 13:32:16.399068 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.399388 kubelet[2519]: E0414 13:32:16.399377 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.399452 kubelet[2519]: W0414 13:32:16.399443 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.399506 kubelet[2519]: E0414 13:32:16.399497 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.400125 kubelet[2519]: E0414 13:32:16.400008 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.400125 kubelet[2519]: W0414 13:32:16.400019 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.400125 kubelet[2519]: E0414 13:32:16.400030 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.400305 kubelet[2519]: E0414 13:32:16.400296 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.400411 kubelet[2519]: W0414 13:32:16.400342 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.400411 kubelet[2519]: E0414 13:32:16.400353 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.400753 kubelet[2519]: E0414 13:32:16.400728 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.400905 kubelet[2519]: W0414 13:32:16.400895 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.400956 kubelet[2519]: E0414 13:32:16.400948 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.401247 kubelet[2519]: E0414 13:32:16.401236 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.401303 kubelet[2519]: W0414 13:32:16.401297 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.401335 kubelet[2519]: E0414 13:32:16.401329 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.401571 kubelet[2519]: E0414 13:32:16.401564 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.402234 kubelet[2519]: W0414 13:32:16.401779 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.402234 kubelet[2519]: E0414 13:32:16.401790 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.402360 kubelet[2519]: E0414 13:32:16.402350 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.402400 kubelet[2519]: W0414 13:32:16.402393 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.402440 kubelet[2519]: E0414 13:32:16.402434 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.403041 kubelet[2519]: E0414 13:32:16.403027 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.403214 kubelet[2519]: W0414 13:32:16.403204 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.403263 kubelet[2519]: E0414 13:32:16.403255 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.404086 kubelet[2519]: E0414 13:32:16.404072 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.404156 kubelet[2519]: W0414 13:32:16.404147 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.404201 kubelet[2519]: E0414 13:32:16.404194 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.404515 kubelet[2519]: E0414 13:32:16.404503 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.404584 kubelet[2519]: W0414 13:32:16.404575 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.404697 kubelet[2519]: E0414 13:32:16.404687 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.405334 kubelet[2519]: E0414 13:32:16.405323 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.405378 kubelet[2519]: W0414 13:32:16.405373 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.405408 kubelet[2519]: E0414 13:32:16.405401 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.405738 kubelet[2519]: E0414 13:32:16.405731 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.405776 kubelet[2519]: W0414 13:32:16.405771 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.405895 kubelet[2519]: E0414 13:32:16.405798 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.406248 kubelet[2519]: E0414 13:32:16.406241 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.406284 kubelet[2519]: W0414 13:32:16.406279 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.406311 kubelet[2519]: E0414 13:32:16.406306 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:16.444303 kubelet[2519]: E0414 13:32:16.443588 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.446607 kubelet[2519]: W0414 13:32:16.445933 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.448529 kubelet[2519]: E0414 13:32:16.448370 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.452264 kubelet[2519]: E0414 13:32:16.452223 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.452264 kubelet[2519]: W0414 13:32:16.452250 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.452432 kubelet[2519]: E0414 13:32:16.452323 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:16.489088 kubelet[2519]: E0414 13:32:16.486922 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:16.489088 kubelet[2519]: W0414 13:32:16.486941 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:16.489088 kubelet[2519]: E0414 13:32:16.486961 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:17.768430 kubelet[2519]: E0414 13:32:17.768146 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:18.273543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171770802.mount: Deactivated successfully. 
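Note on the repeated FlexVolume errors above: the kubelet's plugin prober scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and tries to run the uds binary with the single argument init, expecting a JSON status object on stdout. The binary has not been installed yet, so the call produces no output and the JSON decode fails with "unexpected end of JSON input". The sketch below is illustrative only, not the real nodeagent~uds driver (that is installed by the flexvol-driver container further down); it only shows the kind of reply a FlexVolume driver is expected to print for init.

    package main

    // Illustrative FlexVolume driver stub, NOT the real nodeagent~uds binary:
    // it only demonstrates the call convention that is failing in the log above.
    // For "init" the kubelet expects a JSON status object on stdout; an empty
    // reply is what produces "unexpected end of JSON input".

    import (
            "encoding/json"
            "fmt"
            "os"
    )

    type driverStatus struct {
            Status       string          `json:"status"`
            Message      string          `json:"message,omitempty"`
            Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
            if len(os.Args) > 1 && os.Args[1] == "init" {
                    out, _ := json.Marshal(driverStatus{
                            Status:       "Success",
                            Capabilities: map[string]bool{"attach": false},
                    })
                    fmt.Println(string(out))
                    return
            }
            // Calls this stub does not implement are reported as unsupported.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
            os.Exit(1)
    }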
Apr 14 13:32:19.614975 containerd[1463]: time="2026-04-14T13:32:19.614911980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:19.616110 containerd[1463]: time="2026-04-14T13:32:19.616029175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 14 13:32:19.619049 containerd[1463]: time="2026-04-14T13:32:19.619000055Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:19.623722 containerd[1463]: time="2026-04-14T13:32:19.622802570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:19.624936 containerd[1463]: time="2026-04-14T13:32:19.624881379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.52551051s" Apr 14 13:32:19.624996 containerd[1463]: time="2026-04-14T13:32:19.624969861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 14 13:32:19.630847 containerd[1463]: time="2026-04-14T13:32:19.630761982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 14 13:32:19.675468 containerd[1463]: time="2026-04-14T13:32:19.675372273Z" level=info msg="CreateContainer within sandbox \"3cd7b06c6f905323dedd7b979cd46cd26e51712fdf000caf90a26520f39c1d00\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 14 13:32:19.708288 containerd[1463]: time="2026-04-14T13:32:19.708032872Z" level=info msg="CreateContainer within sandbox \"3cd7b06c6f905323dedd7b979cd46cd26e51712fdf000caf90a26520f39c1d00\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c3bdbc5272905b71705c5bc69122886a9daa7e48bb98cc307d74ccf059f807fb\"" Apr 14 13:32:19.710872 containerd[1463]: time="2026-04-14T13:32:19.710228299Z" level=info msg="StartContainer for \"c3bdbc5272905b71705c5bc69122886a9daa7e48bb98cc307d74ccf059f807fb\"" Apr 14 13:32:19.766424 systemd[1]: Started cri-containerd-c3bdbc5272905b71705c5bc69122886a9daa7e48bb98cc307d74ccf059f807fb.scope - libcontainer container c3bdbc5272905b71705c5bc69122886a9daa7e48bb98cc307d74ccf059f807fb. 
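The "in 3.52551051s" in the Pulled image entry is simply the wall-clock span of the typha pull; the kubelet later reports a nearly identical window as firstStartedPulling (13:32:16.097) and lastFinishedPulling (13:32:19.630). A small worked check of that arithmetic, with the two timestamps copied from the journal entries (illustrative only):

    package main

    // Recomputes the typha image pull duration from the firstStartedPulling and
    // lastFinishedPulling timestamps the kubelet logs a few entries below.
    // Purely illustrative; both values are copied verbatim from the journal.

    import (
            "fmt"
            "time"
    )

    func main() {
            const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
            start, _ := time.Parse(layout, "2026-04-14 13:32:16.097214543 +0000 UTC")
            end, _ := time.Parse(layout, "2026-04-14 13:32:19.63041602 +0000 UTC")
            fmt.Println(end.Sub(start)) // ~3.533s, in line with the 3.52551051s containerd reports
    }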
Apr 14 13:32:19.770886 kubelet[2519]: E0414 13:32:19.769001 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:19.910787 containerd[1463]: time="2026-04-14T13:32:19.910315704Z" level=info msg="StartContainer for \"c3bdbc5272905b71705c5bc69122886a9daa7e48bb98cc307d74ccf059f807fb\" returns successfully" Apr 14 13:32:20.273362 kubelet[2519]: E0414 13:32:20.272400 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:20.273362 kubelet[2519]: E0414 13:32:20.273011 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.273362 kubelet[2519]: W0414 13:32:20.273030 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.273362 kubelet[2519]: E0414 13:32:20.273054 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.282796 kubelet[2519]: E0414 13:32:20.274219 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.282796 kubelet[2519]: W0414 13:32:20.280799 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.282796 kubelet[2519]: E0414 13:32:20.280909 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.285453 kubelet[2519]: E0414 13:32:20.284042 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.285453 kubelet[2519]: W0414 13:32:20.284083 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.285453 kubelet[2519]: E0414 13:32:20.284128 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.285453 kubelet[2519]: E0414 13:32:20.285197 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.285453 kubelet[2519]: W0414 13:32:20.285217 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.285453 kubelet[2519]: E0414 13:32:20.285294 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:20.288921 kubelet[2519]: E0414 13:32:20.288256 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.288921 kubelet[2519]: W0414 13:32:20.288306 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.288921 kubelet[2519]: E0414 13:32:20.288320 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.292574 kubelet[2519]: E0414 13:32:20.289481 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.292574 kubelet[2519]: W0414 13:32:20.289516 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.292574 kubelet[2519]: E0414 13:32:20.289566 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.293743 kubelet[2519]: E0414 13:32:20.293705 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.293743 kubelet[2519]: W0414 13:32:20.293732 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.293919 kubelet[2519]: E0414 13:32:20.293747 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.294737 kubelet[2519]: E0414 13:32:20.294009 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.294737 kubelet[2519]: W0414 13:32:20.294019 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.294737 kubelet[2519]: E0414 13:32:20.294026 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.294737 kubelet[2519]: E0414 13:32:20.294519 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.294737 kubelet[2519]: W0414 13:32:20.294526 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.294737 kubelet[2519]: E0414 13:32:20.294558 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:20.294966 kubelet[2519]: E0414 13:32:20.294844 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.294966 kubelet[2519]: W0414 13:32:20.294851 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.294966 kubelet[2519]: E0414 13:32:20.294858 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.297041 kubelet[2519]: E0414 13:32:20.295891 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.297041 kubelet[2519]: W0414 13:32:20.295932 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.297041 kubelet[2519]: E0414 13:32:20.295947 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.297041 kubelet[2519]: E0414 13:32:20.296300 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.297041 kubelet[2519]: W0414 13:32:20.296306 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.297041 kubelet[2519]: E0414 13:32:20.296313 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.303851 kubelet[2519]: E0414 13:32:20.302405 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.303851 kubelet[2519]: W0414 13:32:20.302447 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.303851 kubelet[2519]: E0414 13:32:20.302528 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.304041 kubelet[2519]: E0414 13:32:20.303874 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.304041 kubelet[2519]: W0414 13:32:20.303885 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.304041 kubelet[2519]: E0414 13:32:20.303898 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:20.304214 kubelet[2519]: E0414 13:32:20.304203 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.304214 kubelet[2519]: W0414 13:32:20.304212 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.304265 kubelet[2519]: E0414 13:32:20.304223 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.398886 kubelet[2519]: E0414 13:32:20.396692 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.398886 kubelet[2519]: W0414 13:32:20.396784 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.398886 kubelet[2519]: E0414 13:32:20.396880 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.398886 kubelet[2519]: E0414 13:32:20.398876 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.399257 kubelet[2519]: W0414 13:32:20.398891 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.399257 kubelet[2519]: E0414 13:32:20.398974 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.399459 kubelet[2519]: E0414 13:32:20.399417 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.399860 kubelet[2519]: W0414 13:32:20.399516 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.399860 kubelet[2519]: E0414 13:32:20.399541 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.448557 kubelet[2519]: E0414 13:32:20.448248 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.448557 kubelet[2519]: W0414 13:32:20.448294 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.448987 kubelet[2519]: E0414 13:32:20.448656 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:20.451021 kubelet[2519]: E0414 13:32:20.449695 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.451021 kubelet[2519]: W0414 13:32:20.449745 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.451021 kubelet[2519]: E0414 13:32:20.449763 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.460449 kubelet[2519]: E0414 13:32:20.456719 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.460449 kubelet[2519]: W0414 13:32:20.456742 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.460449 kubelet[2519]: E0414 13:32:20.456767 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.475518 kubelet[2519]: E0414 13:32:20.471524 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.475518 kubelet[2519]: W0414 13:32:20.475314 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.480883 kubelet[2519]: E0414 13:32:20.475479 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.489222 kubelet[2519]: E0414 13:32:20.489068 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.499860 kubelet[2519]: I0414 13:32:20.490277 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-99446444b-f44zh" podStartSLOduration=1.9570477720000001 podStartE2EDuration="5.490249257s" podCreationTimestamp="2026-04-14 13:32:15 +0000 UTC" firstStartedPulling="2026-04-14 13:32:16.097214543 +0000 UTC m=+24.714829107" lastFinishedPulling="2026-04-14 13:32:19.63041602 +0000 UTC m=+28.248030592" observedRunningTime="2026-04-14 13:32:20.457314164 +0000 UTC m=+29.074928741" watchObservedRunningTime="2026-04-14 13:32:20.490249257 +0000 UTC m=+29.107863866" Apr 14 13:32:20.503618 kubelet[2519]: W0414 13:32:20.493232 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.504127 kubelet[2519]: E0414 13:32:20.504104 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:20.517021 kubelet[2519]: E0414 13:32:20.514444 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.519911 kubelet[2519]: W0414 13:32:20.519835 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.534171 kubelet[2519]: E0414 13:32:20.530554 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.542695 kubelet[2519]: E0414 13:32:20.542445 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.543042 kubelet[2519]: W0414 13:32:20.542953 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.543144 kubelet[2519]: E0414 13:32:20.543096 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.545424 kubelet[2519]: E0414 13:32:20.545324 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.545424 kubelet[2519]: W0414 13:32:20.545383 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.547388 kubelet[2519]: E0414 13:32:20.545704 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.550697 kubelet[2519]: E0414 13:32:20.550655 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.550697 kubelet[2519]: W0414 13:32:20.550681 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.550888 kubelet[2519]: E0414 13:32:20.550697 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.551119 kubelet[2519]: E0414 13:32:20.551091 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.551119 kubelet[2519]: W0414 13:32:20.551108 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.551119 kubelet[2519]: E0414 13:32:20.551117 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:20.557611 kubelet[2519]: E0414 13:32:20.557349 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.561286 kubelet[2519]: W0414 13:32:20.558220 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.561712 kubelet[2519]: E0414 13:32:20.561583 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.563853 kubelet[2519]: E0414 13:32:20.563428 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.564602 kubelet[2519]: W0414 13:32:20.564187 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.592205 kubelet[2519]: E0414 13:32:20.591684 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.664585 kubelet[2519]: E0414 13:32:20.664419 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.664585 kubelet[2519]: W0414 13:32:20.664473 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.664585 kubelet[2519]: E0414 13:32:20.664509 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.665397 kubelet[2519]: E0414 13:32:20.665336 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.665397 kubelet[2519]: W0414 13:32:20.665373 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.665397 kubelet[2519]: E0414 13:32:20.665387 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:20.668871 kubelet[2519]: E0414 13:32:20.666569 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:20.668871 kubelet[2519]: W0414 13:32:20.666618 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:20.668871 kubelet[2519]: E0414 13:32:20.666789 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.290238 kubelet[2519]: E0414 13:32:21.289975 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:21.378952 kubelet[2519]: E0414 13:32:21.373570 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.378952 kubelet[2519]: W0414 13:32:21.378115 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.386160 kubelet[2519]: E0414 13:32:21.385907 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.393179 kubelet[2519]: E0414 13:32:21.393068 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.393460 kubelet[2519]: W0414 13:32:21.393448 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.393547 kubelet[2519]: E0414 13:32:21.393534 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.400459 kubelet[2519]: E0414 13:32:21.400253 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.400970 kubelet[2519]: W0414 13:32:21.400937 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.401106 kubelet[2519]: E0414 13:32:21.401072 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.466450 kubelet[2519]: E0414 13:32:21.466320 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.466450 kubelet[2519]: W0414 13:32:21.466430 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.466450 kubelet[2519]: E0414 13:32:21.466456 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.466931 kubelet[2519]: E0414 13:32:21.466904 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.466931 kubelet[2519]: W0414 13:32:21.466929 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.467121 kubelet[2519]: E0414 13:32:21.466943 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.467121 kubelet[2519]: E0414 13:32:21.467088 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.467121 kubelet[2519]: W0414 13:32:21.467094 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.467121 kubelet[2519]: E0414 13:32:21.467103 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.467419 kubelet[2519]: E0414 13:32:21.467223 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.469196 kubelet[2519]: W0414 13:32:21.468678 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.471259 kubelet[2519]: E0414 13:32:21.471123 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.484284 kubelet[2519]: E0414 13:32:21.484119 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.484284 kubelet[2519]: W0414 13:32:21.484246 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.484759 kubelet[2519]: E0414 13:32:21.484329 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.505709 kubelet[2519]: E0414 13:32:21.504569 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.506377 kubelet[2519]: W0414 13:32:21.506107 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.506449 kubelet[2519]: E0414 13:32:21.506407 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.515777 kubelet[2519]: E0414 13:32:21.515373 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.516453 kubelet[2519]: W0414 13:32:21.516307 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.516453 kubelet[2519]: E0414 13:32:21.516428 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.527204 kubelet[2519]: E0414 13:32:21.521902 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.527204 kubelet[2519]: W0414 13:32:21.523137 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.527204 kubelet[2519]: E0414 13:32:21.523279 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.528834 kubelet[2519]: E0414 13:32:21.528708 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.532244 kubelet[2519]: W0414 13:32:21.531884 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.532502 kubelet[2519]: E0414 13:32:21.532267 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.549692 kubelet[2519]: E0414 13:32:21.549165 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.549692 kubelet[2519]: W0414 13:32:21.549541 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.554244 kubelet[2519]: E0414 13:32:21.553416 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.563253 kubelet[2519]: E0414 13:32:21.560451 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.563253 kubelet[2519]: W0414 13:32:21.561421 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.570042 kubelet[2519]: E0414 13:32:21.566119 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.578992 kubelet[2519]: E0414 13:32:21.577527 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.578992 kubelet[2519]: W0414 13:32:21.577554 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.587263 kubelet[2519]: E0414 13:32:21.580785 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.588312 kubelet[2519]: E0414 13:32:21.588189 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.588312 kubelet[2519]: W0414 13:32:21.588276 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.588312 kubelet[2519]: E0414 13:32:21.588315 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.588716 kubelet[2519]: E0414 13:32:21.588686 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.588716 kubelet[2519]: W0414 13:32:21.588710 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.588794 kubelet[2519]: E0414 13:32:21.588722 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.588942 kubelet[2519]: E0414 13:32:21.588917 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.588942 kubelet[2519]: W0414 13:32:21.588937 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.588986 kubelet[2519]: E0414 13:32:21.588945 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.589891 kubelet[2519]: E0414 13:32:21.589756 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.589891 kubelet[2519]: W0414 13:32:21.589867 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.589891 kubelet[2519]: E0414 13:32:21.589886 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.591853 kubelet[2519]: E0414 13:32:21.590197 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.591853 kubelet[2519]: W0414 13:32:21.590205 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.591853 kubelet[2519]: E0414 13:32:21.590212 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.591853 kubelet[2519]: E0414 13:32:21.590538 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.591853 kubelet[2519]: W0414 13:32:21.590544 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.591853 kubelet[2519]: E0414 13:32:21.590552 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.592077 kubelet[2519]: E0414 13:32:21.591959 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.592077 kubelet[2519]: W0414 13:32:21.591995 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.592077 kubelet[2519]: E0414 13:32:21.592023 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.594962 kubelet[2519]: E0414 13:32:21.594898 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.594962 kubelet[2519]: W0414 13:32:21.594946 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.594962 kubelet[2519]: E0414 13:32:21.594960 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.607018 kubelet[2519]: E0414 13:32:21.603594 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.607018 kubelet[2519]: W0414 13:32:21.604256 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.607018 kubelet[2519]: E0414 13:32:21.604303 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.607018 kubelet[2519]: E0414 13:32:21.604842 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.607018 kubelet[2519]: W0414 13:32:21.604851 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.607018 kubelet[2519]: E0414 13:32:21.604861 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.607018 kubelet[2519]: E0414 13:32:21.605063 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.607018 kubelet[2519]: W0414 13:32:21.605069 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.607018 kubelet[2519]: E0414 13:32:21.605076 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.611617 kubelet[2519]: E0414 13:32:21.611505 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.611617 kubelet[2519]: W0414 13:32:21.611596 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.612165 kubelet[2519]: E0414 13:32:21.611650 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.613043 kubelet[2519]: E0414 13:32:21.612953 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.613043 kubelet[2519]: W0414 13:32:21.613020 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.613043 kubelet[2519]: E0414 13:32:21.613039 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.613482 kubelet[2519]: E0414 13:32:21.613440 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.613482 kubelet[2519]: W0414 13:32:21.613456 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.613482 kubelet[2519]: E0414 13:32:21.613464 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.613973 kubelet[2519]: E0414 13:32:21.613931 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.613973 kubelet[2519]: W0414 13:32:21.613954 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.613973 kubelet[2519]: E0414 13:32:21.613965 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.614603 kubelet[2519]: E0414 13:32:21.614566 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.614603 kubelet[2519]: W0414 13:32:21.614586 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.614603 kubelet[2519]: E0414 13:32:21.614594 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.617938 kubelet[2519]: E0414 13:32:21.614852 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.617938 kubelet[2519]: W0414 13:32:21.614861 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.617938 kubelet[2519]: E0414 13:32:21.614868 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 13:32:21.617938 kubelet[2519]: E0414 13:32:21.615121 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 13:32:21.617938 kubelet[2519]: W0414 13:32:21.615126 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 13:32:21.617938 kubelet[2519]: E0414 13:32:21.615132 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 13:32:21.761344 kubelet[2519]: E0414 13:32:21.761261 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:22.535480 containerd[1463]: time="2026-04-14T13:32:22.535364020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:22.539864 containerd[1463]: time="2026-04-14T13:32:22.539726350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 14 13:32:22.541401 containerd[1463]: time="2026-04-14T13:32:22.541288442Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:22.546742 containerd[1463]: time="2026-04-14T13:32:22.546602628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:22.547670 containerd[1463]: time="2026-04-14T13:32:22.547612530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.916801902s" Apr 14 13:32:22.547772 containerd[1463]: time="2026-04-14T13:32:22.547679612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 14 13:32:22.723876 containerd[1463]: time="2026-04-14T13:32:22.722209144Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 14 13:32:22.784319 containerd[1463]: time="2026-04-14T13:32:22.784233983Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12\"" Apr 14 13:32:22.794123 containerd[1463]: time="2026-04-14T13:32:22.789033585Z" level=info msg="StartContainer for \"22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12\"" Apr 14 13:32:22.989383 systemd[1]: Started cri-containerd-22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12.scope - libcontainer container 22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12. Apr 14 13:32:23.155865 containerd[1463]: time="2026-04-14T13:32:23.154450853Z" level=info msg="StartContainer for \"22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12\" returns successfully" Apr 14 13:32:23.174073 systemd[1]: cri-containerd-22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12.scope: Deactivated successfully. 
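The driver-call.go and plugins.go errors repeated above come from kubelet's FlexVolume prober: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to unmarshal the command's stdout as JSON. That binary is not on the node yet (the flexvol-driver init container created above is what is expected to install it), so the call returns empty output and the probe keeps retrying. As a minimal sketch of the JSON an init call is supposed to emit under the FlexVolume convention (type and field names below are illustrative, not taken from the actual Calico driver):

package main

import (
    "encoding/json"
    "fmt"
)

// driverStatus mirrors the general shape of a FlexVolume driver response;
// the names are chosen for this sketch rather than copied from Calico.
type driverStatus struct {
    Status       string          `json:"status"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    // What a successful "init" call is expected to print on stdout; an empty
    // stdout is exactly what produces the "unexpected end of JSON input"
    // unmarshal error seen above.
    out, err := json.Marshal(driverStatus{
        Status:       "Success",
        Capabilities: map[string]bool{"attach": false},
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}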
Apr 14 13:32:23.362152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12-rootfs.mount: Deactivated successfully. Apr 14 13:32:23.490011 containerd[1463]: time="2026-04-14T13:32:23.489126261Z" level=info msg="shim disconnected" id=22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12 namespace=k8s.io Apr 14 13:32:23.496115 containerd[1463]: time="2026-04-14T13:32:23.492211004Z" level=warning msg="cleaning up after shim disconnected" id=22b8e47878f3b0d2639f03eaab157e4c184e96fc86fd1706376f650b6bf66a12 namespace=k8s.io Apr 14 13:32:23.496115 containerd[1463]: time="2026-04-14T13:32:23.495111887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:32:23.784931 kubelet[2519]: E0414 13:32:23.784563 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:24.517467 containerd[1463]: time="2026-04-14T13:32:24.517403381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 14 13:32:25.762879 kubelet[2519]: E0414 13:32:25.759787 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:27.778917 kubelet[2519]: E0414 13:32:27.778430 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:29.759383 kubelet[2519]: E0414 13:32:29.759314 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:31.763801 kubelet[2519]: E0414 13:32:31.763736 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:33.776181 kubelet[2519]: E0414 13:32:33.775991 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:35.277426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477765841.mount: Deactivated successfully. 
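The csi-node-driver-cps8s pod keeps being skipped with NetworkPluginNotReady because kubelet reports the container runtime network as not ready until a CNI configuration exists on the node; Calico's install-cni step, pulled a little further down as ghcr.io/flatcar/calico/cni, is what drops that configuration in. A rough stand-alone check along the same lines, assuming the default configuration directory /etc/cni/net.d (this is not kubelet's actual code path, just an illustration):

package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Assumed default CNI configuration directory; the directory actually in
    // use is whatever containerd's CRI plugin is configured with.
    confDir := "/etc/cni/net.d"
    matches, err := filepath.Glob(filepath.Join(confDir, "*.conf*"))
    if err != nil {
        fmt.Println("glob failed:", err)
        return
    }
    if len(matches) == 0 {
        fmt.Println("no CNI network config yet; NetworkReady stays false and pod syncs keep being skipped")
        return
    }
    fmt.Println("CNI config present:", matches)
}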
Apr 14 13:32:35.381982 containerd[1463]: time="2026-04-14T13:32:35.381890938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:35.384924 containerd[1463]: time="2026-04-14T13:32:35.384540321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 14 13:32:35.390792 containerd[1463]: time="2026-04-14T13:32:35.390265141Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:35.400955 containerd[1463]: time="2026-04-14T13:32:35.400861420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:35.447674 containerd[1463]: time="2026-04-14T13:32:35.443506124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.926043498s" Apr 14 13:32:35.447674 containerd[1463]: time="2026-04-14T13:32:35.443619188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 14 13:32:35.467091 containerd[1463]: time="2026-04-14T13:32:35.467006427Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 14 13:32:35.511074 containerd[1463]: time="2026-04-14T13:32:35.510984026Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee\"" Apr 14 13:32:35.528089 containerd[1463]: time="2026-04-14T13:32:35.527783090Z" level=info msg="StartContainer for \"400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee\"" Apr 14 13:32:35.771070 kubelet[2519]: E0414 13:32:35.770647 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:35.791465 systemd[1]: Started cri-containerd-400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee.scope - libcontainer container 400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee. Apr 14 13:32:36.029068 containerd[1463]: time="2026-04-14T13:32:36.027788532Z" level=info msg="StartContainer for \"400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee\" returns successfully" Apr 14 13:32:36.416378 systemd[1]: cri-containerd-400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee.scope: Deactivated successfully. 
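For scale, the calico/node pull above moved roughly 160 MB in just under 11 seconds. A quick back-of-envelope from the figures in the log, treating the "bytes read" counter as an approximation of the compressed transfer size:

package main

import "fmt"

func main() {
    // Figures copied from the calico/node:v3.31.4 pull above.
    const bytesRead = 159838564.0    // "active requests=0, bytes read=159838564"
    const pullSeconds = 10.926043498 // "Pulled image ... in 10.926043498s"
    mib := bytesRead / (1 << 20)
    fmt.Printf("%.1f MiB in %.2f s ≈ %.1f MiB/s\n", mib, pullSeconds, mib/pullSeconds)
}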
Apr 14 13:32:36.575220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee-rootfs.mount: Deactivated successfully. Apr 14 13:32:36.701242 containerd[1463]: time="2026-04-14T13:32:36.698568308Z" level=info msg="shim disconnected" id=400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee namespace=k8s.io Apr 14 13:32:36.701242 containerd[1463]: time="2026-04-14T13:32:36.699382632Z" level=warning msg="cleaning up after shim disconnected" id=400cba2a8f333c40451bb0d9616788df19f1cb59613c689b3e91979cbd051bee namespace=k8s.io Apr 14 13:32:36.701242 containerd[1463]: time="2026-04-14T13:32:36.699502776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:32:36.977731 containerd[1463]: time="2026-04-14T13:32:36.976197502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 14 13:32:37.760632 kubelet[2519]: E0414 13:32:37.760527 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:39.763198 kubelet[2519]: E0414 13:32:39.762396 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:41.816130 kubelet[2519]: E0414 13:32:41.815583 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:43.762139 kubelet[2519]: E0414 13:32:43.761547 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:43.881926 containerd[1463]: time="2026-04-14T13:32:43.880525066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:43.882831 containerd[1463]: time="2026-04-14T13:32:43.882698624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 14 13:32:43.884509 containerd[1463]: time="2026-04-14T13:32:43.884433876Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:43.922612 containerd[1463]: time="2026-04-14T13:32:43.897629903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:43.948956 containerd[1463]: time="2026-04-14T13:32:43.946836475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 6.970276579s" Apr 14 13:32:43.948956 containerd[1463]: time="2026-04-14T13:32:43.947646206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 14 13:32:44.035220 containerd[1463]: time="2026-04-14T13:32:44.032163127Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 13:32:44.121241 containerd[1463]: time="2026-04-14T13:32:44.120790560Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9\"" Apr 14 13:32:44.127761 containerd[1463]: time="2026-04-14T13:32:44.127292777Z" level=info msg="StartContainer for \"b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9\"" Apr 14 13:32:44.286088 systemd[1]: Started cri-containerd-b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9.scope - libcontainer container b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9. Apr 14 13:32:44.391872 containerd[1463]: time="2026-04-14T13:32:44.391711123Z" level=info msg="StartContainer for \"b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9\" returns successfully" Apr 14 13:32:45.770224 kubelet[2519]: E0414 13:32:45.765471 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:46.466215 systemd[1]: cri-containerd-b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9.scope: Deactivated successfully. Apr 14 13:32:46.466910 systemd[1]: cri-containerd-b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9.scope: Consumed 1.411s CPU time. Apr 14 13:32:46.554664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9-rootfs.mount: Deactivated successfully. 
Apr 14 13:32:46.558488 containerd[1463]: time="2026-04-14T13:32:46.556072630Z" level=info msg="shim disconnected" id=b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9 namespace=k8s.io Apr 14 13:32:46.558488 containerd[1463]: time="2026-04-14T13:32:46.556164628Z" level=warning msg="cleaning up after shim disconnected" id=b71a0c8cc7cf7d930e01f64d0786a1cca79251853bab6746b4cc497056fefaf9 namespace=k8s.io Apr 14 13:32:46.558488 containerd[1463]: time="2026-04-14T13:32:46.556171963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:32:46.581893 kubelet[2519]: I0414 13:32:46.581851 2519 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 14 13:32:46.644787 containerd[1463]: time="2026-04-14T13:32:46.644616403Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:32:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 13:32:47.231773 systemd[1]: Created slice kubepods-besteffort-pod47415411_3815_4e70_b149_e05ad96c0a9d.slice - libcontainer container kubepods-besteffort-pod47415411_3815_4e70_b149_e05ad96c0a9d.slice. Apr 14 13:32:47.335090 kubelet[2519]: I0414 13:32:47.334656 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tffj\" (UniqueName: \"kubernetes.io/projected/47415411-3815-4e70-b149-e05ad96c0a9d-kube-api-access-6tffj\") pod \"whisker-56b974bcd6-psxbc\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " pod="calico-system/whisker-56b974bcd6-psxbc" Apr 14 13:32:47.337298 kubelet[2519]: I0414 13:32:47.336688 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-backend-key-pair\") pod \"whisker-56b974bcd6-psxbc\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " pod="calico-system/whisker-56b974bcd6-psxbc" Apr 14 13:32:47.337298 kubelet[2519]: I0414 13:32:47.336774 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-ca-bundle\") pod \"whisker-56b974bcd6-psxbc\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " pod="calico-system/whisker-56b974bcd6-psxbc" Apr 14 13:32:47.337298 kubelet[2519]: I0414 13:32:47.336835 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-nginx-config\") pod \"whisker-56b974bcd6-psxbc\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " pod="calico-system/whisker-56b974bcd6-psxbc" Apr 14 13:32:47.392155 systemd[1]: Created slice kubepods-besteffort-poda573c7f5_88e5_4897_8831_187a489d5981.slice - libcontainer container kubepods-besteffort-poda573c7f5_88e5_4897_8831_187a489d5981.slice. 
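With install-cni finished, the runtime can report NetworkReady, kubelet fast-updates the node status (the kubelet_node_status.go line above), and the pending Calico, whisker, goldmane and CoreDNS pods start landing on the node; each one first appears as a transient systemd slice. The slice names follow a simple pattern that can be reproduced from the pod UID and QoS class, as a small sketch matching the "Created slice" lines in this log (checked only against these examples, not against every kubelet configuration):

package main

import (
    "fmt"
    "strings"
)

// podSliceName rebuilds the leaf pod slice name used by the systemd cgroup
// driver: the QoS class plus the pod UID with its dashes turned into
// underscores.
func podSliceName(qosClass, podUID string) string {
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
    fmt.Println(podSliceName("besteffort", "47415411-3815-4e70-b149-e05ad96c0a9d"))
    // kubepods-besteffort-pod47415411_3815_4e70_b149_e05ad96c0a9d.slice
    fmt.Println(podSliceName("burstable", "c29fe4b2-bccb-43ac-94ff-906cb974bbf2"))
    // kubepods-burstable-podc29fe4b2_bccb_43ac_94ff_906cb974bbf2.slice
}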
Apr 14 13:32:47.461196 kubelet[2519]: I0414 13:32:47.456461 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eec711c2-8d03-4974-9177-e6d5f178fa6e-config-volume\") pod \"coredns-674b8bbfcf-t4bm4\" (UID: \"eec711c2-8d03-4974-9177-e6d5f178fa6e\") " pod="kube-system/coredns-674b8bbfcf-t4bm4" Apr 14 13:32:47.461196 kubelet[2519]: I0414 13:32:47.456515 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfkkt\" (UniqueName: \"kubernetes.io/projected/eec711c2-8d03-4974-9177-e6d5f178fa6e-kube-api-access-rfkkt\") pod \"coredns-674b8bbfcf-t4bm4\" (UID: \"eec711c2-8d03-4974-9177-e6d5f178fa6e\") " pod="kube-system/coredns-674b8bbfcf-t4bm4" Apr 14 13:32:47.461196 kubelet[2519]: I0414 13:32:47.456557 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvbb\" (UniqueName: \"kubernetes.io/projected/c29fe4b2-bccb-43ac-94ff-906cb974bbf2-kube-api-access-5gvbb\") pod \"coredns-674b8bbfcf-bpmv2\" (UID: \"c29fe4b2-bccb-43ac-94ff-906cb974bbf2\") " pod="kube-system/coredns-674b8bbfcf-bpmv2" Apr 14 13:32:47.461196 kubelet[2519]: I0414 13:32:47.456573 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a573c7f5-88e5-4897-8831-187a489d5981-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-lq4vt\" (UID: \"a573c7f5-88e5-4897-8831-187a489d5981\") " pod="calico-system/goldmane-5b85766d88-lq4vt" Apr 14 13:32:47.461196 kubelet[2519]: I0414 13:32:47.457144 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5wr\" (UniqueName: \"kubernetes.io/projected/a573c7f5-88e5-4897-8831-187a489d5981-kube-api-access-bv5wr\") pod \"goldmane-5b85766d88-lq4vt\" (UID: \"a573c7f5-88e5-4897-8831-187a489d5981\") " pod="calico-system/goldmane-5b85766d88-lq4vt" Apr 14 13:32:47.461031 systemd[1]: Created slice kubepods-burstable-podc29fe4b2_bccb_43ac_94ff_906cb974bbf2.slice - libcontainer container kubepods-burstable-podc29fe4b2_bccb_43ac_94ff_906cb974bbf2.slice. 
Apr 14 13:32:47.461652 kubelet[2519]: I0414 13:32:47.457417 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a573c7f5-88e5-4897-8831-187a489d5981-config\") pod \"goldmane-5b85766d88-lq4vt\" (UID: \"a573c7f5-88e5-4897-8831-187a489d5981\") " pod="calico-system/goldmane-5b85766d88-lq4vt" Apr 14 13:32:47.461652 kubelet[2519]: I0414 13:32:47.457448 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a573c7f5-88e5-4897-8831-187a489d5981-goldmane-key-pair\") pod \"goldmane-5b85766d88-lq4vt\" (UID: \"a573c7f5-88e5-4897-8831-187a489d5981\") " pod="calico-system/goldmane-5b85766d88-lq4vt" Apr 14 13:32:47.461652 kubelet[2519]: I0414 13:32:47.457464 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29fe4b2-bccb-43ac-94ff-906cb974bbf2-config-volume\") pod \"coredns-674b8bbfcf-bpmv2\" (UID: \"c29fe4b2-bccb-43ac-94ff-906cb974bbf2\") " pod="kube-system/coredns-674b8bbfcf-bpmv2" Apr 14 13:32:47.485562 systemd[1]: Created slice kubepods-burstable-podeec711c2_8d03_4974_9177_e6d5f178fa6e.slice - libcontainer container kubepods-burstable-podeec711c2_8d03_4974_9177_e6d5f178fa6e.slice. Apr 14 13:32:47.580325 kubelet[2519]: I0414 13:32:47.563644 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drhj\" (UniqueName: \"kubernetes.io/projected/c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7-kube-api-access-6drhj\") pod \"calico-apiserver-55877c889c-7wj62\" (UID: \"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7\") " pod="calico-system/calico-apiserver-55877c889c-7wj62" Apr 14 13:32:47.583930 kubelet[2519]: I0414 13:32:47.583879 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02259ab1-493b-4927-8c23-c062c006fdf7-calico-apiserver-certs\") pod \"calico-apiserver-55877c889c-n22g4\" (UID: \"02259ab1-493b-4927-8c23-c062c006fdf7\") " pod="calico-system/calico-apiserver-55877c889c-n22g4" Apr 14 13:32:47.583930 kubelet[2519]: I0414 13:32:47.583908 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmbp\" (UniqueName: \"kubernetes.io/projected/02259ab1-493b-4927-8c23-c062c006fdf7-kube-api-access-gtmbp\") pod \"calico-apiserver-55877c889c-n22g4\" (UID: \"02259ab1-493b-4927-8c23-c062c006fdf7\") " pod="calico-system/calico-apiserver-55877c889c-n22g4" Apr 14 13:32:47.584155 kubelet[2519]: I0414 13:32:47.583958 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6981126-7658-4757-a8d5-0d67c493dae2-tigera-ca-bundle\") pod \"calico-kube-controllers-6f5c776cfd-5dq8p\" (UID: \"a6981126-7658-4757-a8d5-0d67c493dae2\") " pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" Apr 14 13:32:47.584155 kubelet[2519]: I0414 13:32:47.583985 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7-calico-apiserver-certs\") pod \"calico-apiserver-55877c889c-7wj62\" (UID: \"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7\") " 
pod="calico-system/calico-apiserver-55877c889c-7wj62" Apr 14 13:32:47.584155 kubelet[2519]: I0414 13:32:47.584021 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdwh\" (UniqueName: \"kubernetes.io/projected/a6981126-7658-4757-a8d5-0d67c493dae2-kube-api-access-whdwh\") pod \"calico-kube-controllers-6f5c776cfd-5dq8p\" (UID: \"a6981126-7658-4757-a8d5-0d67c493dae2\") " pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" Apr 14 13:32:47.585763 containerd[1463]: time="2026-04-14T13:32:47.585279911Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 14 13:32:47.660373 systemd[1]: Created slice kubepods-besteffort-podc556a0c9_d9a1_4a5f_8f2f_a48e00e5d5e7.slice - libcontainer container kubepods-besteffort-podc556a0c9_d9a1_4a5f_8f2f_a48e00e5d5e7.slice. Apr 14 13:32:47.774910 containerd[1463]: time="2026-04-14T13:32:47.774431836Z" level=info msg="CreateContainer within sandbox \"b957b3d54e607de71711ce603d604ef6b7263f5d0af4910f39afd6a947bb2004\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93\"" Apr 14 13:32:47.803874 containerd[1463]: time="2026-04-14T13:32:47.798423939Z" level=info msg="StartContainer for \"7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93\"" Apr 14 13:32:47.895061 systemd[1]: Created slice kubepods-besteffort-pod02259ab1_493b_4927_8c23_c062c006fdf7.slice - libcontainer container kubepods-besteffort-pod02259ab1_493b_4927_8c23_c062c006fdf7.slice. Apr 14 13:32:47.969317 containerd[1463]: time="2026-04-14T13:32:47.969227483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56b974bcd6-psxbc,Uid:47415411-3815-4e70-b149-e05ad96c0a9d,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:48.014530 systemd[1]: Created slice kubepods-besteffort-poda6981126_7658_4757_a8d5_0d67c493dae2.slice - libcontainer container kubepods-besteffort-poda6981126_7658_4757_a8d5_0d67c493dae2.slice. Apr 14 13:32:48.042105 systemd[1]: Started cri-containerd-7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93.scope - libcontainer container 7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93. Apr 14 13:32:48.054179 containerd[1463]: time="2026-04-14T13:32:48.051262025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5c776cfd-5dq8p,Uid:a6981126-7658-4757-a8d5-0d67c493dae2,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:48.056397 systemd[1]: Created slice kubepods-besteffort-podc2556d03_7ce4_4031_9834_67fb67a536f0.slice - libcontainer container kubepods-besteffort-podc2556d03_7ce4_4031_9834_67fb67a536f0.slice. 
Apr 14 13:32:48.084468 kubelet[2519]: E0414 13:32:48.084365 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:48.087892 containerd[1463]: time="2026-04-14T13:32:48.087512143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-7wj62,Uid:c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:48.087892 containerd[1463]: time="2026-04-14T13:32:48.087729826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpmv2,Uid:c29fe4b2-bccb-43ac-94ff-906cb974bbf2,Namespace:kube-system,Attempt:0,}" Apr 14 13:32:48.132162 containerd[1463]: time="2026-04-14T13:32:48.129267413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cps8s,Uid:c2556d03-7ce4-4031-9834-67fb67a536f0,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:48.263559 kubelet[2519]: E0414 13:32:48.257053 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:48.362489 containerd[1463]: time="2026-04-14T13:32:48.301509835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t4bm4,Uid:eec711c2-8d03-4974-9177-e6d5f178fa6e,Namespace:kube-system,Attempt:0,}" Apr 14 13:32:48.384021 containerd[1463]: time="2026-04-14T13:32:48.380151345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-n22g4,Uid:02259ab1-493b-4927-8c23-c062c006fdf7,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:48.384021 containerd[1463]: time="2026-04-14T13:32:48.380960877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lq4vt,Uid:a573c7f5-88e5-4897-8831-187a489d5981,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:48.580541 containerd[1463]: time="2026-04-14T13:32:48.577657113Z" level=info msg="StartContainer for \"7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93\" returns successfully" Apr 14 13:32:49.719159 containerd[1463]: time="2026-04-14T13:32:49.719019624Z" level=error msg="Failed to destroy network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.722982 containerd[1463]: time="2026-04-14T13:32:49.722665722Z" level=error msg="encountered an error cleaning up failed sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.723256 containerd[1463]: time="2026-04-14T13:32:49.723233370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-7wj62,Uid:c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.724538 
containerd[1463]: time="2026-04-14T13:32:49.724195438Z" level=error msg="Failed to destroy network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.724597 containerd[1463]: time="2026-04-14T13:32:49.724569256Z" level=error msg="encountered an error cleaning up failed sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.724654 containerd[1463]: time="2026-04-14T13:32:49.724611824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5c776cfd-5dq8p,Uid:a6981126-7658-4757-a8d5-0d67c493dae2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.736480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319-shm.mount: Deactivated successfully. Apr 14 13:32:49.736649 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86-shm.mount: Deactivated successfully. Apr 14 13:32:49.764037 kubelet[2519]: E0414 13:32:49.763972 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.764513 kubelet[2519]: E0414 13:32:49.764143 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.764513 kubelet[2519]: E0414 13:32:49.764233 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55877c889c-7wj62" Apr 14 13:32:49.764513 kubelet[2519]: E0414 13:32:49.764272 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55877c889c-7wj62" Apr 14 13:32:49.764580 kubelet[2519]: E0414 13:32:49.764443 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55877c889c-7wj62_calico-system(c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55877c889c-7wj62_calico-system(c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55877c889c-7wj62" podUID="c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7" Apr 14 13:32:49.766369 kubelet[2519]: E0414 13:32:49.764106 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" Apr 14 13:32:49.772102 kubelet[2519]: E0414 13:32:49.765471 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" Apr 14 13:32:49.775498 kubelet[2519]: E0414 13:32:49.773418 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f5c776cfd-5dq8p_calico-system(a6981126-7658-4757-a8d5-0d67c493dae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f5c776cfd-5dq8p_calico-system(a6981126-7658-4757-a8d5-0d67c493dae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" podUID="a6981126-7658-4757-a8d5-0d67c493dae2" Apr 14 13:32:49.871148 containerd[1463]: time="2026-04-14T13:32:49.870777745Z" level=error msg="Failed to destroy network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.872568 containerd[1463]: time="2026-04-14T13:32:49.872431584Z" level=error msg="encountered an error cleaning up failed sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.872786 containerd[1463]: time="2026-04-14T13:32:49.872620825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-n22g4,Uid:02259ab1-493b-4927-8c23-c062c006fdf7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.889995 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1-shm.mount: Deactivated successfully. Apr 14 13:32:49.894998 kubelet[2519]: E0414 13:32:49.891506 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.894998 kubelet[2519]: E0414 13:32:49.891848 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55877c889c-n22g4" Apr 14 13:32:49.894998 kubelet[2519]: E0414 13:32:49.892177 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55877c889c-n22g4" Apr 14 13:32:49.895104 kubelet[2519]: E0414 13:32:49.892480 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55877c889c-n22g4_calico-system(02259ab1-493b-4927-8c23-c062c006fdf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55877c889c-n22g4_calico-system(02259ab1-493b-4927-8c23-c062c006fdf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55877c889c-n22g4" podUID="02259ab1-493b-4927-8c23-c062c006fdf7" Apr 14 13:32:49.943483 containerd[1463]: time="2026-04-14T13:32:49.943247743Z" level=error msg="Failed to destroy network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.944209 containerd[1463]: 
time="2026-04-14T13:32:49.944094252Z" level=error msg="encountered an error cleaning up failed sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.944209 containerd[1463]: time="2026-04-14T13:32:49.944190492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56b974bcd6-psxbc,Uid:47415411-3815-4e70-b149-e05ad96c0a9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.948181 kubelet[2519]: E0414 13:32:49.947708 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:49.948312 kubelet[2519]: E0414 13:32:49.948218 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56b974bcd6-psxbc" Apr 14 13:32:49.948312 kubelet[2519]: E0414 13:32:49.948245 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56b974bcd6-psxbc" Apr 14 13:32:49.948427 kubelet[2519]: E0414 13:32:49.948374 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56b974bcd6-psxbc_calico-system(47415411-3815-4e70-b149-e05ad96c0a9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56b974bcd6-psxbc_calico-system(47415411-3815-4e70-b149-e05ad96c0a9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56b974bcd6-psxbc" podUID="47415411-3815-4e70-b149-e05ad96c0a9d" Apr 14 13:32:49.950493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8-shm.mount: Deactivated successfully. 
Apr 14 13:32:49.977706 containerd[1463]: time="2026-04-14T13:32:49.974895098Z" level=error msg="Failed to destroy network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.002546 containerd[1463]: time="2026-04-14T13:32:50.001453723Z" level=error msg="encountered an error cleaning up failed sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.013439 containerd[1463]: time="2026-04-14T13:32:50.004006212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t4bm4,Uid:eec711c2-8d03-4974-9177-e6d5f178fa6e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.025953 kubelet[2519]: E0414 13:32:50.024437 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.025953 kubelet[2519]: E0414 13:32:50.024576 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t4bm4" Apr 14 13:32:50.025953 kubelet[2519]: E0414 13:32:50.024611 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t4bm4" Apr 14 13:32:50.027259 kubelet[2519]: E0414 13:32:50.024707 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t4bm4_kube-system(eec711c2-8d03-4974-9177-e6d5f178fa6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t4bm4_kube-system(eec711c2-8d03-4974-9177-e6d5f178fa6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t4bm4" 
podUID="eec711c2-8d03-4974-9177-e6d5f178fa6e" Apr 14 13:32:50.099132 containerd[1463]: time="2026-04-14T13:32:50.098472038Z" level=error msg="Failed to destroy network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.122725 kubelet[2519]: I0414 13:32:50.101619 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:32:50.122725 kubelet[2519]: E0414 13:32:50.121097 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.122725 kubelet[2519]: E0414 13:32:50.122165 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:50.122725 kubelet[2519]: E0414 13:32:50.122363 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cps8s" Apr 14 13:32:50.123208 kubelet[2519]: E0414 13:32:50.122503 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cps8s_calico-system(c2556d03-7ce4-4031-9834-67fb67a536f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cps8s_calico-system(c2556d03-7ce4-4031-9834-67fb67a536f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:50.123300 containerd[1463]: time="2026-04-14T13:32:50.118799855Z" level=error msg="encountered an error cleaning up failed sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.123300 containerd[1463]: time="2026-04-14T13:32:50.119159187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cps8s,Uid:c2556d03-7ce4-4031-9834-67fb67a536f0,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.134684 kubelet[2519]: I0414 13:32:50.134327 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:32:50.148972 containerd[1463]: time="2026-04-14T13:32:50.148885086Z" level=error msg="Failed to destroy network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.151518 containerd[1463]: time="2026-04-14T13:32:50.151390961Z" level=error msg="encountered an error cleaning up failed sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.154525 containerd[1463]: time="2026-04-14T13:32:50.152561493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lq4vt,Uid:a573c7f5-88e5-4897-8831-187a489d5981,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.156162 kubelet[2519]: E0414 13:32:50.155780 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.156162 kubelet[2519]: E0414 13:32:50.155904 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-lq4vt" Apr 14 13:32:50.158694 kubelet[2519]: E0414 13:32:50.157294 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-lq4vt" Apr 14 13:32:50.158694 kubelet[2519]: E0414 13:32:50.157797 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-5b85766d88-lq4vt_calico-system(a573c7f5-88e5-4897-8831-187a489d5981)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-lq4vt_calico-system(a573c7f5-88e5-4897-8831-187a489d5981)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-lq4vt" podUID="a573c7f5-88e5-4897-8831-187a489d5981" Apr 14 13:32:50.183894 containerd[1463]: time="2026-04-14T13:32:50.182983755Z" level=info msg="StopPodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\"" Apr 14 13:32:50.183894 containerd[1463]: time="2026-04-14T13:32:50.183714183Z" level=info msg="StopPodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\"" Apr 14 13:32:50.201199 containerd[1463]: time="2026-04-14T13:32:50.200031337Z" level=info msg="Ensure that sandbox 9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8 in task-service has been cleanup successfully" Apr 14 13:32:50.259111 containerd[1463]: time="2026-04-14T13:32:50.258489744Z" level=info msg="Ensure that sandbox e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1 in task-service has been cleanup successfully" Apr 14 13:32:50.313374 containerd[1463]: time="2026-04-14T13:32:50.313224327Z" level=error msg="Failed to destroy network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.314519 containerd[1463]: time="2026-04-14T13:32:50.314253455Z" level=error msg="encountered an error cleaning up failed sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.316041 containerd[1463]: time="2026-04-14T13:32:50.314585567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpmv2,Uid:c29fe4b2-bccb-43ac-94ff-906cb974bbf2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.350880 kubelet[2519]: E0414 13:32:50.344294 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.350880 kubelet[2519]: E0414 13:32:50.344344 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpmv2" Apr 14 13:32:50.350880 kubelet[2519]: E0414 13:32:50.344361 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpmv2" Apr 14 13:32:50.351100 kubelet[2519]: E0414 13:32:50.344428 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bpmv2_kube-system(c29fe4b2-bccb-43ac-94ff-906cb974bbf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bpmv2_kube-system(c29fe4b2-bccb-43ac-94ff-906cb974bbf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bpmv2" podUID="c29fe4b2-bccb-43ac-94ff-906cb974bbf2" Apr 14 13:32:50.459904 kubelet[2519]: I0414 13:32:50.457970 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:32:50.471598 containerd[1463]: time="2026-04-14T13:32:50.471207733Z" level=info msg="StopPodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\"" Apr 14 13:32:50.472853 containerd[1463]: time="2026-04-14T13:32:50.472750522Z" level=info msg="Ensure that sandbox b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319 in task-service has been cleanup successfully" Apr 14 13:32:50.519751 kubelet[2519]: I0414 13:32:50.519674 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:32:50.535256 containerd[1463]: time="2026-04-14T13:32:50.535165348Z" level=info msg="StopPodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\"" Apr 14 13:32:50.535555 containerd[1463]: time="2026-04-14T13:32:50.535455209Z" level=info msg="Ensure that sandbox 249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86 in task-service has been cleanup successfully" Apr 14 13:32:50.625194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8-shm.mount: Deactivated successfully. Apr 14 13:32:50.626705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af-shm.mount: Deactivated successfully. Apr 14 13:32:50.627340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58-shm.mount: Deactivated successfully. 
Apr 14 13:32:50.627459 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc-shm.mount: Deactivated successfully. Apr 14 13:32:50.701295 containerd[1463]: time="2026-04-14T13:32:50.701244959Z" level=error msg="StopPodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" failed" error="failed to destroy network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.733292 kubelet[2519]: E0414 13:32:50.732983 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:32:50.733466 kubelet[2519]: E0414 13:32:50.733333 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8"} Apr 14 13:32:50.736056 kubelet[2519]: E0414 13:32:50.735784 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47415411-3815-4e70-b149-e05ad96c0a9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:50.737047 kubelet[2519]: E0414 13:32:50.736983 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47415411-3815-4e70-b149-e05ad96c0a9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56b974bcd6-psxbc" podUID="47415411-3815-4e70-b149-e05ad96c0a9d" Apr 14 13:32:50.777258 containerd[1463]: time="2026-04-14T13:32:50.777034090Z" level=error msg="StopPodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" failed" error="failed to destroy network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.783891 kubelet[2519]: E0414 13:32:50.778713 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:32:50.785929 kubelet[2519]: E0414 13:32:50.783102 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319"} Apr 14 13:32:50.787438 kubelet[2519]: E0414 13:32:50.787369 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:50.790061 kubelet[2519]: E0414 13:32:50.787583 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55877c889c-7wj62" podUID="c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7" Apr 14 13:32:50.888948 containerd[1463]: time="2026-04-14T13:32:50.866389955Z" level=error msg="StopPodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" failed" error="failed to destroy network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.930470 kubelet[2519]: E0414 13:32:50.930311 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:32:50.932714 kubelet[2519]: E0414 13:32:50.932570 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86"} Apr 14 13:32:50.933061 kubelet[2519]: E0414 13:32:50.933043 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6981126-7658-4757-a8d5-0d67c493dae2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:50.937524 kubelet[2519]: E0414 13:32:50.934675 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6981126-7658-4757-a8d5-0d67c493dae2\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" podUID="a6981126-7658-4757-a8d5-0d67c493dae2" Apr 14 13:32:50.945194 containerd[1463]: time="2026-04-14T13:32:50.945057161Z" level=error msg="StopPodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" failed" error="failed to destroy network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:50.947861 kubelet[2519]: E0414 13:32:50.946447 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:32:50.947861 kubelet[2519]: E0414 13:32:50.946636 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1"} Apr 14 13:32:50.947861 kubelet[2519]: E0414 13:32:50.946723 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02259ab1-493b-4927-8c23-c062c006fdf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:50.947861 kubelet[2519]: E0414 13:32:50.947453 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02259ab1-493b-4927-8c23-c062c006fdf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55877c889c-n22g4" podUID="02259ab1-493b-4927-8c23-c062c006fdf7" Apr 14 13:32:51.047497 kubelet[2519]: I0414 13:32:51.047289 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p27zw" podStartSLOduration=8.476988969 podStartE2EDuration="36.047267677s" podCreationTimestamp="2026-04-14 13:32:15 +0000 UTC" firstStartedPulling="2026-04-14 13:32:16.385643851 +0000 UTC m=+25.003258425" lastFinishedPulling="2026-04-14 13:32:43.955922564 +0000 UTC m=+52.573537133" observedRunningTime="2026-04-14 13:32:51.046986486 +0000 UTC m=+59.664601065" watchObservedRunningTime="2026-04-14 13:32:51.047267677 +0000 UTC m=+59.664882254" Apr 14 13:32:51.368088 
kubelet[2519]: E0414 13:32:51.366688 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:51.633003 containerd[1463]: time="2026-04-14T13:32:51.632646263Z" level=info msg="StopPodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\"" Apr 14 13:32:51.633003 containerd[1463]: time="2026-04-14T13:32:51.632863077Z" level=info msg="Ensure that sandbox cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58 in task-service has been cleanup successfully" Apr 14 13:32:51.637939 kubelet[2519]: I0414 13:32:51.629625 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:32:51.664632 kubelet[2519]: I0414 13:32:51.657236 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:32:51.803085 containerd[1463]: time="2026-04-14T13:32:51.799327427Z" level=info msg="StopPodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\"" Apr 14 13:32:51.803085 containerd[1463]: time="2026-04-14T13:32:51.799671837Z" level=info msg="Ensure that sandbox f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af in task-service has been cleanup successfully" Apr 14 13:32:51.915562 containerd[1463]: time="2026-04-14T13:32:51.915138482Z" level=error msg="StopPodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" failed" error="failed to destroy network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:51.926738 kubelet[2519]: E0414 13:32:51.926617 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:32:51.926738 kubelet[2519]: E0414 13:32:51.926698 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58"} Apr 14 13:32:51.927436 kubelet[2519]: E0414 13:32:51.926750 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2556d03-7ce4-4031-9834-67fb67a536f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:51.971204 containerd[1463]: time="2026-04-14T13:32:51.971104290Z" level=error msg="StopPodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" failed" error="failed to destroy network for sandbox 
\"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:51.976366 kubelet[2519]: E0414 13:32:51.974397 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:32:51.976366 kubelet[2519]: E0414 13:32:51.974638 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af"} Apr 14 13:32:51.976366 kubelet[2519]: E0414 13:32:51.974726 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eec711c2-8d03-4974-9177-e6d5f178fa6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:51.981122 kubelet[2519]: E0414 13:32:51.974787 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eec711c2-8d03-4974-9177-e6d5f178fa6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t4bm4" podUID="eec711c2-8d03-4974-9177-e6d5f178fa6e" Apr 14 13:32:51.997400 kubelet[2519]: I0414 13:32:51.997318 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:32:51.997647 kubelet[2519]: E0414 13:32:51.990382 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2556d03-7ce4-4031-9834-67fb67a536f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cps8s" podUID="c2556d03-7ce4-4031-9834-67fb67a536f0" Apr 14 13:32:51.998308 containerd[1463]: time="2026-04-14T13:32:51.998257344Z" level=info msg="StopPodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\"" Apr 14 13:32:51.998467 containerd[1463]: time="2026-04-14T13:32:51.998455015Z" level=info msg="Ensure that sandbox 6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8 in task-service has been cleanup successfully" Apr 14 13:32:52.095660 kubelet[2519]: I0414 13:32:52.095546 2519 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:32:52.105858 containerd[1463]: time="2026-04-14T13:32:52.104896670Z" level=info msg="StopPodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\"" Apr 14 13:32:52.105858 containerd[1463]: time="2026-04-14T13:32:52.105216721Z" level=info msg="Ensure that sandbox d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc in task-service has been cleanup successfully" Apr 14 13:32:52.373729 containerd[1463]: time="2026-04-14T13:32:52.372425943Z" level=error msg="StopPodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" failed" error="failed to destroy network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:52.374369 kubelet[2519]: E0414 13:32:52.373173 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:32:52.374369 kubelet[2519]: E0414 13:32:52.374155 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc"} Apr 14 13:32:52.374369 kubelet[2519]: E0414 13:32:52.374190 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c29fe4b2-bccb-43ac-94ff-906cb974bbf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:52.374369 kubelet[2519]: E0414 13:32:52.374210 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c29fe4b2-bccb-43ac-94ff-906cb974bbf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bpmv2" podUID="c29fe4b2-bccb-43ac-94ff-906cb974bbf2" Apr 14 13:32:52.379647 containerd[1463]: time="2026-04-14T13:32:52.379533668Z" level=error msg="StopPodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" failed" error="failed to destroy network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 13:32:52.385781 kubelet[2519]: E0414 13:32:52.385435 
2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:32:52.389099 kubelet[2519]: E0414 13:32:52.385903 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8"} Apr 14 13:32:52.389099 kubelet[2519]: E0414 13:32:52.386085 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a573c7f5-88e5-4897-8831-187a489d5981\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 13:32:52.389099 kubelet[2519]: E0414 13:32:52.386115 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a573c7f5-88e5-4897-8831-187a489d5981\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-lq4vt" podUID="a573c7f5-88e5-4897-8831-187a489d5981" Apr 14 13:32:52.768164 containerd[1463]: time="2026-04-14T13:32:52.762760256Z" level=info msg="StopPodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\"" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.338 [INFO][3987] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.339 [INFO][3987] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" iface="eth0" netns="/var/run/netns/cni-13154c7f-535e-078b-19b9-87832389c6d4" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.339 [INFO][3987] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" iface="eth0" netns="/var/run/netns/cni-13154c7f-535e-078b-19b9-87832389c6d4" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.340 [INFO][3987] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" iface="eth0" netns="/var/run/netns/cni-13154c7f-535e-078b-19b9-87832389c6d4" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.340 [INFO][3987] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.340 [INFO][3987] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.411 [INFO][3998] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.416 [INFO][3998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.419 [INFO][3998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.457 [WARNING][3998] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.457 [INFO][3998] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.477 [INFO][3998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:32:53.486022 containerd[1463]: 2026-04-14 13:32:53.483 [INFO][3987] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:32:53.488644 containerd[1463]: time="2026-04-14T13:32:53.487034737Z" level=info msg="TearDown network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" successfully" Apr 14 13:32:53.488644 containerd[1463]: time="2026-04-14T13:32:53.487068924Z" level=info msg="StopPodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" returns successfully" Apr 14 13:32:53.534714 systemd[1]: run-netns-cni\x2d13154c7f\x2d535e\x2d078b\x2d19b9\x2d87832389c6d4.mount: Deactivated successfully. Apr 14 13:32:53.745464 kubelet[2519]: I0414 13:32:53.742714 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-nginx-config\") pod \"47415411-3815-4e70-b149-e05ad96c0a9d\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " Apr 14 13:32:53.745464 kubelet[2519]: I0414 13:32:53.743747 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "47415411-3815-4e70-b149-e05ad96c0a9d" (UID: "47415411-3815-4e70-b149-e05ad96c0a9d"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 13:32:53.752762 kubelet[2519]: I0414 13:32:53.750753 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tffj\" (UniqueName: \"kubernetes.io/projected/47415411-3815-4e70-b149-e05ad96c0a9d-kube-api-access-6tffj\") pod \"47415411-3815-4e70-b149-e05ad96c0a9d\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " Apr 14 13:32:53.752762 kubelet[2519]: I0414 13:32:53.751612 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-backend-key-pair\") pod \"47415411-3815-4e70-b149-e05ad96c0a9d\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " Apr 14 13:32:53.752762 kubelet[2519]: I0414 13:32:53.751719 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-ca-bundle\") pod \"47415411-3815-4e70-b149-e05ad96c0a9d\" (UID: \"47415411-3815-4e70-b149-e05ad96c0a9d\") " Apr 14 13:32:53.752762 kubelet[2519]: I0414 13:32:53.752050 2519 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 14 13:32:53.752762 kubelet[2519]: I0414 13:32:53.752663 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "47415411-3815-4e70-b149-e05ad96c0a9d" (UID: "47415411-3815-4e70-b149-e05ad96c0a9d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 13:32:53.765117 kubelet[2519]: I0414 13:32:53.758740 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47415411-3815-4e70-b149-e05ad96c0a9d-kube-api-access-6tffj" (OuterVolumeSpecName: "kube-api-access-6tffj") pod "47415411-3815-4e70-b149-e05ad96c0a9d" (UID: "47415411-3815-4e70-b149-e05ad96c0a9d"). InnerVolumeSpecName "kube-api-access-6tffj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 13:32:53.767460 systemd[1]: var-lib-kubelet-pods-47415411\x2d3815\x2d4e70\x2db149\x2de05ad96c0a9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6tffj.mount: Deactivated successfully. Apr 14 13:32:53.772611 kubelet[2519]: I0414 13:32:53.772510 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "47415411-3815-4e70-b149-e05ad96c0a9d" (UID: "47415411-3815-4e70-b149-e05ad96c0a9d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 13:32:53.773166 systemd[1]: var-lib-kubelet-pods-47415411\x2d3815\x2d4e70\x2db149\x2de05ad96c0a9d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 14 13:32:53.859504 kubelet[2519]: I0414 13:32:53.857770 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 14 13:32:53.861109 kubelet[2519]: I0414 13:32:53.860538 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47415411-3815-4e70-b149-e05ad96c0a9d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 14 13:32:53.861109 kubelet[2519]: I0414 13:32:53.860690 2519 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6tffj\" (UniqueName: \"kubernetes.io/projected/47415411-3815-4e70-b149-e05ad96c0a9d-kube-api-access-6tffj\") on node \"localhost\" DevicePath \"\"" Apr 14 13:32:54.212687 systemd[1]: Removed slice kubepods-besteffort-pod47415411_3815_4e70_b149_e05ad96c0a9d.slice - libcontainer container kubepods-besteffort-pod47415411_3815_4e70_b149_e05ad96c0a9d.slice. Apr 14 13:32:54.949513 kubelet[2519]: I0414 13:32:54.948426 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d-whisker-ca-bundle\") pod \"whisker-897c476d7-55dwk\" (UID: \"757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d\") " pod="calico-system/whisker-897c476d7-55dwk" Apr 14 13:32:54.949513 kubelet[2519]: I0414 13:32:54.948528 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d-nginx-config\") pod \"whisker-897c476d7-55dwk\" (UID: \"757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d\") " pod="calico-system/whisker-897c476d7-55dwk" Apr 14 13:32:54.949513 kubelet[2519]: I0414 13:32:54.948563 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ncbj\" (UniqueName: \"kubernetes.io/projected/757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d-kube-api-access-7ncbj\") pod \"whisker-897c476d7-55dwk\" (UID: \"757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d\") " pod="calico-system/whisker-897c476d7-55dwk" Apr 14 13:32:54.956678 kubelet[2519]: I0414 13:32:54.950879 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d-whisker-backend-key-pair\") pod \"whisker-897c476d7-55dwk\" (UID: \"757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d\") " pod="calico-system/whisker-897c476d7-55dwk" Apr 14 13:32:54.970565 systemd[1]: Created slice kubepods-besteffort-pod757462a9_b1aa_4309_b9d1_e8f5b0aa9e7d.slice - libcontainer container kubepods-besteffort-pod757462a9_b1aa_4309_b9d1_e8f5b0aa9e7d.slice. 
Apr 14 13:32:55.347232 containerd[1463]: time="2026-04-14T13:32:55.346863619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-897c476d7-55dwk,Uid:757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d,Namespace:calico-system,Attempt:0,}" Apr 14 13:32:55.813909 kubelet[2519]: I0414 13:32:55.812496 2519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47415411-3815-4e70-b149-e05ad96c0a9d" path="/var/lib/kubelet/pods/47415411-3815-4e70-b149-e05ad96c0a9d/volumes" Apr 14 13:32:56.354392 systemd-networkd[1400]: calic10ffa0f14a: Link UP Apr 14 13:32:56.355171 systemd-networkd[1400]: calic10ffa0f14a: Gained carrier Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.468 [ERROR][4018] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.644 [INFO][4018] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--897c476d7--55dwk-eth0 whisker-897c476d7- calico-system 757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d 1043 0 2026-04-14 13:32:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:897c476d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-897c476d7-55dwk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic10ffa0f14a [] [] }} ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.644 [INFO][4018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.807 [INFO][4035] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" HandleID="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Workload="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.870 [INFO][4035] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" HandleID="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Workload="localhost-k8s-whisker--897c476d7--55dwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-897c476d7-55dwk", "timestamp":"2026-04-14 13:32:55.807649858 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004f91e0)} Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.870 [INFO][4035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.870 [INFO][4035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.870 [INFO][4035] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:55.992 [INFO][4035] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.051 [INFO][4035] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.131 [INFO][4035] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.162 [INFO][4035] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.244 [INFO][4035] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.245 [INFO][4035] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.277 [INFO][4035] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.303 [INFO][4035] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.333 [INFO][4035] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.333 [INFO][4035] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" host="localhost" Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.333 [INFO][4035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 13:32:56.539928 containerd[1463]: 2026-04-14 13:32:56.333 [INFO][4035] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" HandleID="k8s-pod-network.162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Workload="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.541684 containerd[1463]: 2026-04-14 13:32:56.337 [INFO][4018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--897c476d7--55dwk-eth0", GenerateName:"whisker-897c476d7-", Namespace:"calico-system", SelfLink:"", UID:"757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"897c476d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-897c476d7-55dwk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic10ffa0f14a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:32:56.541684 containerd[1463]: 2026-04-14 13:32:56.338 [INFO][4018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.541684 containerd[1463]: 2026-04-14 13:32:56.338 [INFO][4018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic10ffa0f14a ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.541684 containerd[1463]: 2026-04-14 13:32:56.363 [INFO][4018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.541684 containerd[1463]: 2026-04-14 13:32:56.365 [INFO][4018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--897c476d7--55dwk-eth0", GenerateName:"whisker-897c476d7-", Namespace:"calico-system", SelfLink:"", UID:"757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"897c476d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a", Pod:"whisker-897c476d7-55dwk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic10ffa0f14a", MAC:"e6:67:7d:da:d2:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:32:56.541684 containerd[1463]: 2026-04-14 13:32:56.527 [INFO][4018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a" Namespace="calico-system" Pod="whisker-897c476d7-55dwk" WorkloadEndpoint="localhost-k8s-whisker--897c476d7--55dwk-eth0" Apr 14 13:32:56.717162 containerd[1463]: time="2026-04-14T13:32:56.715345338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:32:56.718595 containerd[1463]: time="2026-04-14T13:32:56.717123556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:32:56.718786 containerd[1463]: time="2026-04-14T13:32:56.718565426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:56.719470 containerd[1463]: time="2026-04-14T13:32:56.719256290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:32:56.770668 systemd[1]: run-containerd-runc-k8s.io-162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a-runc.MwOcI3.mount: Deactivated successfully. Apr 14 13:32:56.781332 systemd[1]: Started cri-containerd-162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a.scope - libcontainer container 162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a. 
Apr 14 13:32:56.982576 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:32:57.080586 containerd[1463]: time="2026-04-14T13:32:57.080522029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-897c476d7-55dwk,Uid:757462a9-b1aa-4309-b9d1-e8f5b0aa9e7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a\"" Apr 14 13:32:57.150853 containerd[1463]: time="2026-04-14T13:32:57.150543085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 14 13:32:57.604449 systemd-networkd[1400]: calic10ffa0f14a: Gained IPv6LL Apr 14 13:32:58.108109 kernel: calico-node[4131]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 14 13:32:58.763243 kubelet[2519]: E0414 13:32:58.762922 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:32:59.096604 systemd-networkd[1400]: vxlan.calico: Link UP Apr 14 13:32:59.096613 systemd-networkd[1400]: vxlan.calico: Gained carrier Apr 14 13:32:59.647542 containerd[1463]: time="2026-04-14T13:32:59.645068546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:59.648322 containerd[1463]: time="2026-04-14T13:32:59.648072022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 14 13:32:59.650507 containerd[1463]: time="2026-04-14T13:32:59.650409548Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:59.661527 containerd[1463]: time="2026-04-14T13:32:59.661039062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:32:59.675752 containerd[1463]: time="2026-04-14T13:32:59.675439061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.524854339s" Apr 14 13:32:59.675752 containerd[1463]: time="2026-04-14T13:32:59.675722126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 14 13:32:59.703455 containerd[1463]: time="2026-04-14T13:32:59.701741638Z" level=info msg="CreateContainer within sandbox \"162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 14 13:32:59.757139 containerd[1463]: time="2026-04-14T13:32:59.757068430Z" level=info msg="CreateContainer within sandbox \"162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d002c3de517679e83ccdd0f22cb10de7d878f3c7f81661cb4ac84ad3c12efea0\"" Apr 14 13:32:59.762113 containerd[1463]: time="2026-04-14T13:32:59.762004458Z" level=info msg="StartContainer for 
\"d002c3de517679e83ccdd0f22cb10de7d878f3c7f81661cb4ac84ad3c12efea0\"" Apr 14 13:32:59.946339 systemd[1]: Started cri-containerd-d002c3de517679e83ccdd0f22cb10de7d878f3c7f81661cb4ac84ad3c12efea0.scope - libcontainer container d002c3de517679e83ccdd0f22cb10de7d878f3c7f81661cb4ac84ad3c12efea0. Apr 14 13:33:00.191783 containerd[1463]: time="2026-04-14T13:33:00.191443145Z" level=info msg="StartContainer for \"d002c3de517679e83ccdd0f22cb10de7d878f3c7f81661cb4ac84ad3c12efea0\" returns successfully" Apr 14 13:33:00.204609 containerd[1463]: time="2026-04-14T13:33:00.204039636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 14 13:33:00.225634 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Apr 14 13:33:01.769034 containerd[1463]: time="2026-04-14T13:33:01.768906513Z" level=info msg="StopPodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\"" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.125 [INFO][4382] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.126 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" iface="eth0" netns="/var/run/netns/cni-349ff425-d44b-6f98-fb89-4b676f257da9" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.126 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" iface="eth0" netns="/var/run/netns/cni-349ff425-d44b-6f98-fb89-4b676f257da9" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.126 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" iface="eth0" netns="/var/run/netns/cni-349ff425-d44b-6f98-fb89-4b676f257da9" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.126 [INFO][4382] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.126 [INFO][4382] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.256 [INFO][4391] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.256 [INFO][4391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.256 [INFO][4391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.294 [WARNING][4391] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.294 [INFO][4391] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.362 [INFO][4391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:02.382317 containerd[1463]: 2026-04-14 13:33:02.369 [INFO][4382] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:02.385550 containerd[1463]: time="2026-04-14T13:33:02.385426204Z" level=info msg="TearDown network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" successfully" Apr 14 13:33:02.385550 containerd[1463]: time="2026-04-14T13:33:02.385509171Z" level=info msg="StopPodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" returns successfully" Apr 14 13:33:02.397377 containerd[1463]: time="2026-04-14T13:33:02.397260315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-7wj62,Uid:c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7,Namespace:calico-system,Attempt:1,}" Apr 14 13:33:02.398451 systemd[1]: run-netns-cni\x2d349ff425\x2dd44b\x2d6f98\x2dfb89\x2d4b676f257da9.mount: Deactivated successfully. Apr 14 13:33:02.774666 containerd[1463]: time="2026-04-14T13:33:02.774293245Z" level=info msg="StopPodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\"" Apr 14 13:33:02.775701 containerd[1463]: time="2026-04-14T13:33:02.774665521Z" level=info msg="StopPodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\"" Apr 14 13:33:02.784900 containerd[1463]: time="2026-04-14T13:33:02.783639258Z" level=info msg="StopPodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\"" Apr 14 13:33:03.761042 systemd-networkd[1400]: califb4966f5b8a: Link UP Apr 14 13:33:03.765456 systemd-networkd[1400]: califb4966f5b8a: Gained carrier Apr 14 13:33:03.794716 containerd[1463]: time="2026-04-14T13:33:03.793495948Z" level=info msg="StopPodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\"" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.275 [INFO][4447] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.275 [INFO][4447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" iface="eth0" netns="/var/run/netns/cni-322cb9e3-8533-eca0-d44d-68775a2c1563" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.275 [INFO][4447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" iface="eth0" netns="/var/run/netns/cni-322cb9e3-8533-eca0-d44d-68775a2c1563" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.276 [INFO][4447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" iface="eth0" netns="/var/run/netns/cni-322cb9e3-8533-eca0-d44d-68775a2c1563" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.276 [INFO][4447] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.276 [INFO][4447] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.397 [INFO][4478] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.397 [INFO][4478] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.749 [INFO][4478] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.892 [WARNING][4478] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:03.893 [INFO][4478] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:04.052 [INFO][4478] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:04.100303 containerd[1463]: 2026-04-14 13:33:04.082 [INFO][4447] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:04.144559 containerd[1463]: time="2026-04-14T13:33:04.140060007Z" level=info msg="TearDown network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" successfully" Apr 14 13:33:04.144559 containerd[1463]: time="2026-04-14T13:33:04.140401913Z" level=info msg="StopPodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" returns successfully" Apr 14 13:33:04.146133 containerd[1463]: time="2026-04-14T13:33:04.146008522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cps8s,Uid:c2556d03-7ce4-4031-9834-67fb67a536f0,Namespace:calico-system,Attempt:1,}" Apr 14 13:33:04.155513 systemd[1]: run-netns-cni\x2d322cb9e3\x2d8533\x2deca0\x2dd44d\x2d68775a2c1563.mount: Deactivated successfully. 
Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:02.793 [INFO][4399] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0 calico-apiserver-55877c889c- calico-system c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7 1067 0 2026-04-14 13:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55877c889c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55877c889c-7wj62 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califb4966f5b8a [] [] }} ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:02.794 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.130 [INFO][4459] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" HandleID="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.170 [INFO][4459] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" HandleID="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-55877c889c-7wj62", "timestamp":"2026-04-14 13:33:03.130618565 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000198580)} Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.170 [INFO][4459] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.170 [INFO][4459] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.170 [INFO][4459] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.265 [INFO][4459] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.344 [INFO][4459] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.457 [INFO][4459] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.490 [INFO][4459] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.552 [INFO][4459] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.552 [INFO][4459] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.598 [INFO][4459] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2 Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.650 [INFO][4459] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.749 [INFO][4459] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.749 [INFO][4459] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" host="localhost" Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.749 [INFO][4459] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 13:33:04.172886 containerd[1463]: 2026-04-14 13:33:03.749 [INFO][4459] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" HandleID="k8s-pod-network.f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.173829 containerd[1463]: 2026-04-14 13:33:03.755 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55877c889c-7wj62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb4966f5b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:04.173829 containerd[1463]: 2026-04-14 13:33:03.755 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.173829 containerd[1463]: 2026-04-14 13:33:03.755 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb4966f5b8a ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.173829 containerd[1463]: 2026-04-14 13:33:03.768 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.173829 containerd[1463]: 2026-04-14 13:33:03.776 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2", Pod:"calico-apiserver-55877c889c-7wj62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb4966f5b8a", MAC:"e2:54:8e:a4:27:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:04.173829 containerd[1463]: 2026-04-14 13:33:04.156 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2" Namespace="calico-system" Pod="calico-apiserver-55877c889c-7wj62" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.325 [INFO][4434] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.326 [INFO][4434] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" iface="eth0" netns="/var/run/netns/cni-8a087ee9-ae7a-37f9-b475-5919fa8a2cb5" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.326 [INFO][4434] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" iface="eth0" netns="/var/run/netns/cni-8a087ee9-ae7a-37f9-b475-5919fa8a2cb5" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.327 [INFO][4434] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" iface="eth0" netns="/var/run/netns/cni-8a087ee9-ae7a-37f9-b475-5919fa8a2cb5" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.328 [INFO][4434] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.328 [INFO][4434] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.487 [INFO][4485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:03.488 [INFO][4485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:04.052 [INFO][4485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:04.163 [WARNING][4485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:04.164 [INFO][4485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:04.228 [INFO][4485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:04.289924 containerd[1463]: 2026-04-14 13:33:04.257 [INFO][4434] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:04.306171 containerd[1463]: time="2026-04-14T13:33:04.301359360Z" level=info msg="TearDown network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" successfully" Apr 14 13:33:04.348546 containerd[1463]: time="2026-04-14T13:33:04.348449197Z" level=info msg="StopPodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" returns successfully" Apr 14 13:33:04.356374 systemd[1]: run-netns-cni\x2d8a087ee9\x2dae7a\x2d37f9\x2db475\x2d5919fa8a2cb5.mount: Deactivated successfully. Apr 14 13:33:04.392707 containerd[1463]: time="2026-04-14T13:33:04.392600260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5c776cfd-5dq8p,Uid:a6981126-7658-4757-a8d5-0d67c493dae2,Namespace:calico-system,Attempt:1,}" Apr 14 13:33:04.419080 containerd[1463]: time="2026-04-14T13:33:04.418624226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:04.419080 containerd[1463]: time="2026-04-14T13:33:04.418875425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:04.419080 containerd[1463]: time="2026-04-14T13:33:04.418895438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:04.421964 containerd[1463]: time="2026-04-14T13:33:04.421561101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.398 [INFO][4456] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.398 [INFO][4456] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" iface="eth0" netns="/var/run/netns/cni-da90f352-0761-663d-02f0-b098dfe86039" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.399 [INFO][4456] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" iface="eth0" netns="/var/run/netns/cni-da90f352-0761-663d-02f0-b098dfe86039" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.399 [INFO][4456] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" iface="eth0" netns="/var/run/netns/cni-da90f352-0761-663d-02f0-b098dfe86039" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.399 [INFO][4456] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.399 [INFO][4456] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.542 [INFO][4493] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:03.542 [INFO][4493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:04.230 [INFO][4493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:04.407 [WARNING][4493] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:04.407 [INFO][4493] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:04.429 [INFO][4493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:04.575135 containerd[1463]: 2026-04-14 13:33:04.474 [INFO][4456] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:33:04.590082 containerd[1463]: time="2026-04-14T13:33:04.589672498Z" level=info msg="TearDown network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" successfully" Apr 14 13:33:04.590082 containerd[1463]: time="2026-04-14T13:33:04.590063758Z" level=info msg="StopPodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" returns successfully" Apr 14 13:33:04.623552 systemd[1]: Started cri-containerd-f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2.scope - libcontainer container f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2. Apr 14 13:33:04.625838 containerd[1463]: time="2026-04-14T13:33:04.625306654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-n22g4,Uid:02259ab1-493b-4927-8c23-c062c006fdf7,Namespace:calico-system,Attempt:1,}" Apr 14 13:33:04.794422 containerd[1463]: time="2026-04-14T13:33:04.790747444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:04.802021 containerd[1463]: time="2026-04-14T13:33:04.801790089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 14 13:33:04.809082 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:04.810964 containerd[1463]: time="2026-04-14T13:33:04.810451221Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:04.847849 containerd[1463]: time="2026-04-14T13:33:04.847712904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:04.851529 containerd[1463]: time="2026-04-14T13:33:04.850791637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 4.646355809s" Apr 14 13:33:04.851529 containerd[1463]: time="2026-04-14T13:33:04.850902390Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 14 13:33:04.903001 containerd[1463]: time="2026-04-14T13:33:04.899702174Z" level=info msg="CreateContainer within sandbox \"162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 14 13:33:05.035978 containerd[1463]: time="2026-04-14T13:33:05.035492083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-7wj62,Uid:c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2\"" Apr 14 13:33:05.035978 containerd[1463]: time="2026-04-14T13:33:05.035964723Z" level=info msg="CreateContainer within sandbox \"162e0354ee203a800bf608c78a4b99d3bd0458547ff2fc966b2fd148ffe7606a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"38a03fb3b936d0e9fe5076296bedf878b8d1d2eebabd227630652aa73bd9167b\"" Apr 14 13:33:05.055153 containerd[1463]: time="2026-04-14T13:33:05.054528095Z" level=info msg="StartContainer for \"38a03fb3b936d0e9fe5076296bedf878b8d1d2eebabd227630652aa73bd9167b\"" Apr 14 13:33:05.069244 containerd[1463]: time="2026-04-14T13:33:05.069159018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 13:33:05.115309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806103710.mount: Deactivated successfully. Apr 14 13:33:05.115404 systemd[1]: run-netns-cni\x2dda90f352\x2d0761\x2d663d\x2d02f0\x2db098dfe86039.mount: Deactivated successfully. Apr 14 13:33:05.194448 systemd[1]: Started cri-containerd-38a03fb3b936d0e9fe5076296bedf878b8d1d2eebabd227630652aa73bd9167b.scope - libcontainer container 38a03fb3b936d0e9fe5076296bedf878b8d1d2eebabd227630652aa73bd9167b. Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:04.925 [INFO][4514] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:04.926 [INFO][4514] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" iface="eth0" netns="/var/run/netns/cni-e44fc31e-fb5b-b53f-1f6d-3e2cec6495ba" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:04.926 [INFO][4514] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" iface="eth0" netns="/var/run/netns/cni-e44fc31e-fb5b-b53f-1f6d-3e2cec6495ba" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:04.926 [INFO][4514] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" iface="eth0" netns="/var/run/netns/cni-e44fc31e-fb5b-b53f-1f6d-3e2cec6495ba" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:04.926 [INFO][4514] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:04.926 [INFO][4514] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.094 [INFO][4625] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.097 [INFO][4625] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.099 [INFO][4625] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.275 [WARNING][4625] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.276 [INFO][4625] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.291 [INFO][4625] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:05.299392 containerd[1463]: 2026-04-14 13:33:05.292 [INFO][4514] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:05.306483 containerd[1463]: time="2026-04-14T13:33:05.299446294Z" level=info msg="TearDown network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" successfully" Apr 14 13:33:05.306483 containerd[1463]: time="2026-04-14T13:33:05.299504095Z" level=info msg="StopPodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" returns successfully" Apr 14 13:33:05.305323 systemd[1]: run-netns-cni\x2de44fc31e\x2dfb5b\x2db53f\x2d1f6d\x2d3e2cec6495ba.mount: Deactivated successfully. 
Apr 14 13:33:05.310606 containerd[1463]: time="2026-04-14T13:33:05.310508911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lq4vt,Uid:a573c7f5-88e5-4897-8831-187a489d5981,Namespace:calico-system,Attempt:1,}" Apr 14 13:33:05.499625 containerd[1463]: time="2026-04-14T13:33:05.498331845Z" level=info msg="StartContainer for \"38a03fb3b936d0e9fe5076296bedf878b8d1d2eebabd227630652aa73bd9167b\" returns successfully" Apr 14 13:33:05.774864 kubelet[2519]: I0414 13:33:05.774461 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-897c476d7-55dwk" podStartSLOduration=4.071218595 podStartE2EDuration="11.774418729s" podCreationTimestamp="2026-04-14 13:32:54 +0000 UTC" firstStartedPulling="2026-04-14 13:32:57.149674558 +0000 UTC m=+65.767289124" lastFinishedPulling="2026-04-14 13:33:04.852874686 +0000 UTC m=+73.470489258" observedRunningTime="2026-04-14 13:33:05.765473392 +0000 UTC m=+74.383087972" watchObservedRunningTime="2026-04-14 13:33:05.774418729 +0000 UTC m=+74.392033306" Apr 14 13:33:05.793644 systemd-networkd[1400]: califb4966f5b8a: Gained IPv6LL Apr 14 13:33:06.080149 systemd-networkd[1400]: cali0edb9ea8d23: Link UP Apr 14 13:33:06.089292 systemd-networkd[1400]: cali0edb9ea8d23: Gained carrier Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:04.631 [INFO][4532] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cps8s-eth0 csi-node-driver- calico-system c2556d03-7ce4-4031-9834-67fb67a536f0 1074 0 2026-04-14 13:32:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cps8s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0edb9ea8d23 [] [] }} ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:04.632 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.067 [INFO][4607] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" HandleID="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.187 [INFO][4607] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" HandleID="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0007844b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cps8s", "timestamp":"2026-04-14 13:33:05.067680554 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e9340)} Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.188 [INFO][4607] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.291 [INFO][4607] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.291 [INFO][4607] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.302 [INFO][4607] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.506 [INFO][4607] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.666 [INFO][4607] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.712 [INFO][4607] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.739 [INFO][4607] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.740 [INFO][4607] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.774 [INFO][4607] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201 Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:05.878 [INFO][4607] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:06.058 [INFO][4607] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:06.058 [INFO][4607] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" host="localhost" Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:06.060 [INFO][4607] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 13:33:06.324841 containerd[1463]: 2026-04-14 13:33:06.060 [INFO][4607] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" HandleID="k8s-pod-network.14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.327589 containerd[1463]: 2026-04-14 13:33:06.065 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cps8s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2556d03-7ce4-4031-9834-67fb67a536f0", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cps8s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0edb9ea8d23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:06.327589 containerd[1463]: 2026-04-14 13:33:06.069 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.327589 containerd[1463]: 2026-04-14 13:33:06.069 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0edb9ea8d23 ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.327589 containerd[1463]: 2026-04-14 13:33:06.088 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.327589 containerd[1463]: 2026-04-14 13:33:06.093 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cps8s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2556d03-7ce4-4031-9834-67fb67a536f0", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201", Pod:"csi-node-driver-cps8s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0edb9ea8d23", MAC:"3e:9c:eb:6d:1e:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:06.327589 containerd[1463]: 2026-04-14 13:33:06.292 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201" Namespace="calico-system" Pod="csi-node-driver-cps8s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:06.437403 containerd[1463]: time="2026-04-14T13:33:06.401482978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:06.437403 containerd[1463]: time="2026-04-14T13:33:06.435333435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:06.437403 containerd[1463]: time="2026-04-14T13:33:06.435501306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:06.438323 containerd[1463]: time="2026-04-14T13:33:06.436083599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:06.542165 systemd[1]: Started cri-containerd-14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201.scope - libcontainer container 14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201. 
Apr 14 13:33:06.656424 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:06.697709 containerd[1463]: time="2026-04-14T13:33:06.697270941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cps8s,Uid:c2556d03-7ce4-4031-9834-67fb67a536f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201\"" Apr 14 13:33:06.811928 containerd[1463]: time="2026-04-14T13:33:06.809076024Z" level=info msg="StopPodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\"" Apr 14 13:33:06.812859 containerd[1463]: time="2026-04-14T13:33:06.812778785Z" level=info msg="StopPodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\"" Apr 14 13:33:06.813170 kubelet[2519]: E0414 13:33:06.813144 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:07.113321 systemd-networkd[1400]: cali346f3c6c6fa: Link UP Apr 14 13:33:07.116514 systemd-networkd[1400]: cali346f3c6c6fa: Gained carrier Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:04.907 [INFO][4566] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0 calico-kube-controllers-6f5c776cfd- calico-system a6981126-7658-4757-a8d5-0d67c493dae2 1075 0 2026-04-14 13:32:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f5c776cfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f5c776cfd-5dq8p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali346f3c6c6fa [] [] }} ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:04.909 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:05.141 [INFO][4642] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" HandleID="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:05.280 [INFO][4642] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" HandleID="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-6f5c776cfd-5dq8p", "timestamp":"2026-04-14 13:33:05.141442847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a9ce0)} Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:05.281 [INFO][4642] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.058 [INFO][4642] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.059 [INFO][4642] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.154 [INFO][4642] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.270 [INFO][4642] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.484 [INFO][4642] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.559 [INFO][4642] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.657 [INFO][4642] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.658 [INFO][4642] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.758 [INFO][4642] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3 Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.884 [INFO][4642] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.965 [INFO][4642] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.973 [INFO][4642] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" host="localhost" Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.977 [INFO][4642] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 13:33:07.239829 containerd[1463]: 2026-04-14 13:33:06.978 [INFO][4642] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" HandleID="k8s-pod-network.67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.243607 containerd[1463]: 2026-04-14 13:33:07.106 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0", GenerateName:"calico-kube-controllers-6f5c776cfd-", Namespace:"calico-system", SelfLink:"", UID:"a6981126-7658-4757-a8d5-0d67c493dae2", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f5c776cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f5c776cfd-5dq8p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali346f3c6c6fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:07.243607 containerd[1463]: 2026-04-14 13:33:07.106 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.243607 containerd[1463]: 2026-04-14 13:33:07.106 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali346f3c6c6fa ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.243607 containerd[1463]: 2026-04-14 13:33:07.113 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.243607 containerd[1463]: 2026-04-14 13:33:07.123 [INFO][4566] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0", GenerateName:"calico-kube-controllers-6f5c776cfd-", Namespace:"calico-system", SelfLink:"", UID:"a6981126-7658-4757-a8d5-0d67c493dae2", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f5c776cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3", Pod:"calico-kube-controllers-6f5c776cfd-5dq8p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali346f3c6c6fa", MAC:"16:88:a7:49:d8:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:07.243607 containerd[1463]: 2026-04-14 13:33:07.233 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3" Namespace="calico-system" Pod="calico-kube-controllers-6f5c776cfd-5dq8p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:07.419995 containerd[1463]: time="2026-04-14T13:33:07.417503966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:07.419995 containerd[1463]: time="2026-04-14T13:33:07.417659526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:07.419995 containerd[1463]: time="2026-04-14T13:33:07.417685441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:07.419995 containerd[1463]: time="2026-04-14T13:33:07.418157283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:07.555652 systemd[1]: run-containerd-runc-k8s.io-67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3-runc.74Mwcd.mount: Deactivated successfully. Apr 14 13:33:07.573403 systemd[1]: Started cri-containerd-67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3.scope - libcontainer container 67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3. 
Apr 14 13:33:07.586036 systemd-networkd[1400]: cali0edb9ea8d23: Gained IPv6LL Apr 14 13:33:07.630002 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:07.794697 systemd-networkd[1400]: calicf12efac954: Link UP Apr 14 13:33:07.806873 systemd-networkd[1400]: calicf12efac954: Gained carrier Apr 14 13:33:08.079319 containerd[1463]: time="2026-04-14T13:33:08.077319914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5c776cfd-5dq8p,Uid:a6981126-7658-4757-a8d5-0d67c493dae2,Namespace:calico-system,Attempt:1,} returns sandbox id \"67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3\"" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:05.259 [INFO][4608] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0 calico-apiserver-55877c889c- calico-system 02259ab1-493b-4927-8c23-c062c006fdf7 1076 0 2026-04-14 13:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55877c889c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55877c889c-n22g4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calicf12efac954 [] [] }} ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:05.265 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:05.444 [INFO][4678] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" HandleID="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:05.510 [INFO][4678] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" HandleID="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-55877c889c-n22g4", "timestamp":"2026-04-14 13:33:05.444666052 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001b82c0)} Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:05.512 [INFO][4678] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:06.977 [INFO][4678] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:06.978 [INFO][4678] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.137 [INFO][4678] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.208 [INFO][4678] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.344 [INFO][4678] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.362 [INFO][4678] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.394 [INFO][4678] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.394 [INFO][4678] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.418 [INFO][4678] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917 Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.462 [INFO][4678] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.583 [INFO][4678] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.601 [INFO][4678] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" host="localhost" Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.636 [INFO][4678] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
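The sequence above is the per-pod allocation loop for calico-apiserver-55877c889c-n22g4: acquire the host-wide IPAM lock, look up the host's block affinities, confirm affinity for 192.168.88.128/26, load the block, claim one free address (192.168.88.133), write the block back, and release the lock. The toy sketch below mirrors that ordering with an in-memory block; it illustrates only the sequence visible in the log and is not Calico's implementation.

```go
// Toy sketch of the allocation order logged above: lock, pick from the
// host-affine /26 block, persist the claim, unlock. Not Calico code.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // address -> IPAM handle
}

var (
	hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	affine   = block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]string{}}
)

func autoAssign(handle string) (netip.Addr, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock"
	defer hostLock.Unlock() // "Released host-wide IPAM lock"

	// "Trying affinity for 192.168.88.128/26" / "Attempting to assign 1 addresses from block"
	for a := affine.cidr.Addr(); affine.cidr.Contains(a); a = a.Next() {
		if _, taken := affine.used[a]; !taken {
			affine.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", affine.cidr)
}

func main() {
	for _, h := range []string{"pod-a", "pod-b", "pod-c"} {
		ip, err := autoAssign(h)
		if err != nil {
			panic(err)
		}
		fmt.Printf("claimed %s for %s\n", ip, h) // .128, .129, .130 in order
	}
}
```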
Apr 14 13:33:08.194453 containerd[1463]: 2026-04-14 13:33:07.637 [INFO][4678] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" HandleID="k8s-pod-network.1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.200553 containerd[1463]: 2026-04-14 13:33:07.700 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"02259ab1-493b-4927-8c23-c062c006fdf7", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55877c889c-n22g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf12efac954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:08.200553 containerd[1463]: 2026-04-14 13:33:07.700 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.200553 containerd[1463]: 2026-04-14 13:33:07.700 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf12efac954 ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.200553 containerd[1463]: 2026-04-14 13:33:07.822 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.200553 containerd[1463]: 2026-04-14 13:33:07.825 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"02259ab1-493b-4927-8c23-c062c006fdf7", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917", Pod:"calico-apiserver-55877c889c-n22g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf12efac954", MAC:"32:b7:15:6b:a4:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:08.200553 containerd[1463]: 2026-04-14 13:33:08.171 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917" Namespace="calico-system" Pod="calico-apiserver-55877c889c-n22g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:33:08.399300 containerd[1463]: time="2026-04-14T13:33:08.393723605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:08.399300 containerd[1463]: time="2026-04-14T13:33:08.395377571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:08.399300 containerd[1463]: time="2026-04-14T13:33:08.395399066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:08.399300 containerd[1463]: time="2026-04-14T13:33:08.397141195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:08.421178 systemd-networkd[1400]: cali346f3c6c6fa: Gained IPv6LL Apr 14 13:33:08.489237 systemd[1]: Started cri-containerd-1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917.scope - libcontainer container 1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917. 
Apr 14 13:33:08.690169 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:08.865196 systemd-networkd[1400]: calicf12efac954: Gained IPv6LL Apr 14 13:33:08.965211 containerd[1463]: time="2026-04-14T13:33:08.963767328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55877c889c-n22g4,Uid:02259ab1-493b-4927-8c23-c062c006fdf7,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917\"" Apr 14 13:33:08.995741 systemd-networkd[1400]: cali9be359ce271: Link UP Apr 14 13:33:09.032730 systemd-networkd[1400]: cali9be359ce271: Gained carrier Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.408 [INFO][4824] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.411 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" iface="eth0" netns="/var/run/netns/cni-a3839fea-7c12-5fc1-d2a3-b39945bc4cea" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.416 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" iface="eth0" netns="/var/run/netns/cni-a3839fea-7c12-5fc1-d2a3-b39945bc4cea" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.417 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" iface="eth0" netns="/var/run/netns/cni-a3839fea-7c12-5fc1-d2a3-b39945bc4cea" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.417 [INFO][4824] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.417 [INFO][4824] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.861 [INFO][4865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:07.872 [INFO][4865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:08.898 [INFO][4865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:09.054 [WARNING][4865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:09.054 [INFO][4865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:09.257 [INFO][4865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:09.267878 containerd[1463]: 2026-04-14 13:33:09.260 [INFO][4824] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:33:09.277126 containerd[1463]: time="2026-04-14T13:33:09.276406532Z" level=info msg="TearDown network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" successfully" Apr 14 13:33:09.277126 containerd[1463]: time="2026-04-14T13:33:09.276472150Z" level=info msg="StopPodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" returns successfully" Apr 14 13:33:09.320681 kubelet[2519]: E0414 13:33:09.299615 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:09.347645 systemd[1]: run-netns-cni\x2da3839fea\x2d7c12\x2d5fc1\x2dd2a3\x2db39945bc4cea.mount: Deactivated successfully. Apr 14 13:33:09.354025 containerd[1463]: time="2026-04-14T13:33:09.353859219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpmv2,Uid:c29fe4b2-bccb-43ac-94ff-906cb974bbf2,Namespace:kube-system,Attempt:1,}" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:05.787 [INFO][4684] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--lq4vt-eth0 goldmane-5b85766d88- calico-system a573c7f5-88e5-4897-8831-187a489d5981 1086 0 2026-04-14 13:32:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-lq4vt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9be359ce271 [] [] }} ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:05.787 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:06.144 [INFO][4728] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" HandleID="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" 
Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:06.246 [INFO][4728] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" HandleID="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-lq4vt", "timestamp":"2026-04-14 13:33:06.144715261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f7080)} Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:06.246 [INFO][4728] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:07.644 [INFO][4728] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:07.644 [INFO][4728] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:07.996 [INFO][4728] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.206 [INFO][4728] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.402 [INFO][4728] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.428 [INFO][4728] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.580 [INFO][4728] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.591 [INFO][4728] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.629 [INFO][4728] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661 Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.779 [INFO][4728] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.896 [INFO][4728] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.897 [INFO][4728] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" host="localhost" Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 
13:33:08.898 [INFO][4728] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:09.367042 containerd[1463]: 2026-04-14 13:33:08.900 [INFO][4728] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" HandleID="k8s-pod-network.6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.370570 containerd[1463]: 2026-04-14 13:33:08.952 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--lq4vt-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a573c7f5-88e5-4897-8831-187a489d5981", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-lq4vt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9be359ce271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:09.370570 containerd[1463]: 2026-04-14 13:33:08.961 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.370570 containerd[1463]: 2026-04-14 13:33:08.962 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9be359ce271 ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.370570 containerd[1463]: 2026-04-14 13:33:09.031 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.370570 containerd[1463]: 2026-04-14 13:33:09.032 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--lq4vt-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a573c7f5-88e5-4897-8831-187a489d5981", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661", Pod:"goldmane-5b85766d88-lq4vt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9be359ce271", MAC:"b6:fb:a2:1a:1c:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:09.370570 containerd[1463]: 2026-04-14 13:33:09.363 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661" Namespace="calico-system" Pod="goldmane-5b85766d88-lq4vt" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:07.454 [INFO][4830] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:07.467 [INFO][4830] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" iface="eth0" netns="/var/run/netns/cni-8690eceb-6441-b5c9-d0a2-f66562c197e1" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:07.467 [INFO][4830] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" iface="eth0" netns="/var/run/netns/cni-8690eceb-6441-b5c9-d0a2-f66562c197e1" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:07.467 [INFO][4830] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" iface="eth0" netns="/var/run/netns/cni-8690eceb-6441-b5c9-d0a2-f66562c197e1" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:07.468 [INFO][4830] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:07.468 [INFO][4830] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:08.025 [INFO][4878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:08.034 [INFO][4878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:09.257 [INFO][4878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:09.589 [WARNING][4878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:09.589 [INFO][4878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:09.657 [INFO][4878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:09.676346 containerd[1463]: 2026-04-14 13:33:09.665 [INFO][4830] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:09.686365 containerd[1463]: time="2026-04-14T13:33:09.676342670Z" level=info msg="TearDown network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" successfully" Apr 14 13:33:09.686365 containerd[1463]: time="2026-04-14T13:33:09.676442854Z" level=info msg="StopPodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" returns successfully" Apr 14 13:33:09.696459 kubelet[2519]: E0414 13:33:09.696283 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:09.696401 systemd[1]: run-netns-cni\x2d8690eceb\x2d6441\x2db5c9\x2dd0a2\x2df66562c197e1.mount: Deactivated successfully. Apr 14 13:33:09.699351 containerd[1463]: time="2026-04-14T13:33:09.699284413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t4bm4,Uid:eec711c2-8d03-4974-9177-e6d5f178fa6e,Namespace:kube-system,Attempt:1,}" Apr 14 13:33:09.798714 containerd[1463]: time="2026-04-14T13:33:09.798347693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:09.798714 containerd[1463]: time="2026-04-14T13:33:09.798588392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:09.798714 containerd[1463]: time="2026-04-14T13:33:09.798614846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:09.801621 containerd[1463]: time="2026-04-14T13:33:09.801520682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:09.970552 systemd[1]: Started cri-containerd-6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661.scope - libcontainer container 6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661. Apr 14 13:33:10.075944 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:10.177479 containerd[1463]: time="2026-04-14T13:33:10.176583845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lq4vt,Uid:a573c7f5-88e5-4897-8831-187a489d5981,Namespace:calico-system,Attempt:1,} returns sandbox id \"6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661\"" Apr 14 13:33:10.529743 systemd-networkd[1400]: cali9be359ce271: Gained IPv6LL Apr 14 13:33:10.781509 kubelet[2519]: E0414 13:33:10.780967 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:11.795997 systemd-networkd[1400]: cali503361c1731: Link UP Apr 14 13:33:11.801968 systemd-networkd[1400]: cali503361c1731: Gained carrier Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.138 [INFO][4991] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0 coredns-674b8bbfcf- kube-system c29fe4b2-bccb-43ac-94ff-906cb974bbf2 1111 0 2026-04-14 13:31:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bpmv2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali503361c1731 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.141 [INFO][4991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.506 [INFO][5068] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" HandleID="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.666 [INFO][5068] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" HandleID="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bpmv2", "timestamp":"2026-04-14 13:33:10.50653871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00064e580)} Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.671 [INFO][5068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.671 [INFO][5068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.671 [INFO][5068] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.784 [INFO][5068] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:10.976 [INFO][5068] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.234 [INFO][5068] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.277 [INFO][5068] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.397 [INFO][5068] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.398 [INFO][5068] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.483 [INFO][5068] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874 Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.591 [INFO][5068] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.769 [INFO][5068] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.769 [INFO][5068] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" host="localhost" Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.773 [INFO][5068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
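Every assignment in this section lands in the same host-affine block, 192.168.88.128/26, and the addresses come out sequentially: .132 (calico-kube-controllers), .133 (calico-apiserver), .134 (goldmane), .135 (coredns-674b8bbfcf-bpmv2, just above), with .136 claimed for the second coredns pod further below. A /26 spans 64 addresses (.128 through .191), so the block is nowhere near exhaustion; the short sketch below just makes that arithmetic explicit, with the assignment count taken from the entries in this section only.

```go
// Sketch: size of the node's affine block and how much the assignments visible
// in this section consume. Counts come from the log above, nothing else.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	blk := netip.MustParsePrefix("192.168.88.128/26")
	total := 0
	for a := blk.Addr(); blk.Contains(a); a = a.Next() {
		total++
	}
	assignedHere := 5 // .132-.136 as logged in this section
	fmt.Printf("block %s: %d addresses, %d assigned here, %d free\n",
		blk, total, assignedHere, total-assignedHere)
	// block 192.168.88.128/26: 64 addresses, 5 assigned here, 59 free
}
```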
Apr 14 13:33:12.185154 containerd[1463]: 2026-04-14 13:33:11.773 [INFO][5068] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" HandleID="k8s-pod-network.3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.189615 containerd[1463]: 2026-04-14 13:33:11.782 [INFO][4991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c29fe4b2-bccb-43ac-94ff-906cb974bbf2", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bpmv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali503361c1731", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:12.189615 containerd[1463]: 2026-04-14 13:33:11.782 [INFO][4991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.189615 containerd[1463]: 2026-04-14 13:33:11.782 [INFO][4991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali503361c1731 ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.189615 containerd[1463]: 2026-04-14 13:33:11.801 [INFO][4991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.189615 
containerd[1463]: 2026-04-14 13:33:11.857 [INFO][4991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c29fe4b2-bccb-43ac-94ff-906cb974bbf2", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874", Pod:"coredns-674b8bbfcf-bpmv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali503361c1731", MAC:"3a:08:e0:b1:d2:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:12.189615 containerd[1463]: 2026-04-14 13:33:12.172 [INFO][4991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpmv2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:33:12.352942 containerd[1463]: time="2026-04-14T13:33:12.348968397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:12.352942 containerd[1463]: time="2026-04-14T13:33:12.349087153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:12.353486 containerd[1463]: time="2026-04-14T13:33:12.349106326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:12.363543 containerd[1463]: time="2026-04-14T13:33:12.363166267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:12.477901 systemd[1]: Started cri-containerd-3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874.scope - libcontainer container 3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874. Apr 14 13:33:12.555656 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:12.677300 containerd[1463]: time="2026-04-14T13:33:12.677185970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpmv2,Uid:c29fe4b2-bccb-43ac-94ff-906cb974bbf2,Namespace:kube-system,Attempt:1,} returns sandbox id \"3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874\"" Apr 14 13:33:12.704029 kubelet[2519]: E0414 13:33:12.702498 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:12.798121 containerd[1463]: time="2026-04-14T13:33:12.794970030Z" level=info msg="CreateContainer within sandbox \"3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 13:33:12.971402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751368663.mount: Deactivated successfully. Apr 14 13:33:12.989292 containerd[1463]: time="2026-04-14T13:33:12.988510403Z" level=info msg="CreateContainer within sandbox \"3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4d5fdf55c725aabdcc46e5d597755374593ae7ad4e7cb710b8c84e1eb555ffc\"" Apr 14 13:33:12.990126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488102568.mount: Deactivated successfully. Apr 14 13:33:12.991028 containerd[1463]: time="2026-04-14T13:33:12.990972862Z" level=info msg="StartContainer for \"d4d5fdf55c725aabdcc46e5d597755374593ae7ad4e7cb710b8c84e1eb555ffc\"" Apr 14 13:33:13.195071 systemd-networkd[1400]: cali6ddd9e5ba3a: Link UP Apr 14 13:33:13.217641 systemd-networkd[1400]: cali6ddd9e5ba3a: Gained carrier Apr 14 13:33:13.263213 systemd[1]: Started cri-containerd-d4d5fdf55c725aabdcc46e5d597755374593ae7ad4e7cb710b8c84e1eb555ffc.scope - libcontainer container d4d5fdf55c725aabdcc46e5d597755374593ae7ad4e7cb710b8c84e1eb555ffc. 
Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:10.363 [INFO][5027] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0 coredns-674b8bbfcf- kube-system eec711c2-8d03-4974-9177-e6d5f178fa6e 1112 0 2026-04-14 13:31:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-t4bm4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ddd9e5ba3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:10.366 [INFO][5027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:10.791 [INFO][5086] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" HandleID="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:10.888 [INFO][5086] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" HandleID="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012b130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-t4bm4", "timestamp":"2026-04-14 13:33:10.791339815 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192840)} Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:10.889 [INFO][5086] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:11.773 [INFO][5086] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:11.775 [INFO][5086] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:11.869 [INFO][5086] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.084 [INFO][5086] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.348 [INFO][5086] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.472 [INFO][5086] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.560 [INFO][5086] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.564 [INFO][5086] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.673 [INFO][5086] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8 Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:12.776 [INFO][5086] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:13.152 [INFO][5086] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:13.153 [INFO][5086] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" host="localhost" Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:13.154 [INFO][5086] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
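The coredns workload endpoints in this section (the dump for coredns-674b8bbfcf-bpmv2 above, and the one for coredns-674b8bbfcf-t4bm4 just below) carry named ports encoded as hex in the Go struct dump: Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics. Those are simply the familiar CoreDNS ports written in hexadecimal, as the one-liner below confirms.

```go
// The endpoint dumps print ports in hex: 0x35 and 0x23c1 are 53 (DNS) and
// 9153 (CoreDNS metrics).
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153
}
```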
Apr 14 13:33:13.465047 containerd[1463]: 2026-04-14 13:33:13.154 [INFO][5086] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" HandleID="k8s-pod-network.2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.471550 containerd[1463]: 2026-04-14 13:33:13.165 [INFO][5027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eec711c2-8d03-4974-9177-e6d5f178fa6e", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-t4bm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ddd9e5ba3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:13.471550 containerd[1463]: 2026-04-14 13:33:13.175 [INFO][5027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.471550 containerd[1463]: 2026-04-14 13:33:13.176 [INFO][5027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ddd9e5ba3a ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.471550 containerd[1463]: 2026-04-14 13:33:13.227 [INFO][5027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.471550 
containerd[1463]: 2026-04-14 13:33:13.240 [INFO][5027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eec711c2-8d03-4974-9177-e6d5f178fa6e", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8", Pod:"coredns-674b8bbfcf-t4bm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ddd9e5ba3a", MAC:"e6:65:11:c7:01:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:13.471550 containerd[1463]: 2026-04-14 13:33:13.457 [INFO][5027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8" Namespace="kube-system" Pod="coredns-674b8bbfcf-t4bm4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:13.473324 containerd[1463]: time="2026-04-14T13:33:13.468800799Z" level=info msg="StartContainer for \"d4d5fdf55c725aabdcc46e5d597755374593ae7ad4e7cb710b8c84e1eb555ffc\" returns successfully" Apr 14 13:33:13.601531 systemd-networkd[1400]: cali503361c1731: Gained IPv6LL Apr 14 13:33:13.827476 containerd[1463]: time="2026-04-14T13:33:13.826871034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:33:13.827476 containerd[1463]: time="2026-04-14T13:33:13.827051750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:33:13.827476 containerd[1463]: time="2026-04-14T13:33:13.827082104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:13.828528 containerd[1463]: time="2026-04-14T13:33:13.828249845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:33:13.925396 systemd[1]: Started cri-containerd-2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8.scope - libcontainer container 2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8. Apr 14 13:33:14.049939 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:33:14.213321 containerd[1463]: time="2026-04-14T13:33:14.210994583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t4bm4,Uid:eec711c2-8d03-4974-9177-e6d5f178fa6e,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8\"" Apr 14 13:33:14.245183 kubelet[2519]: E0414 13:33:14.244721 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:14.277995 containerd[1463]: time="2026-04-14T13:33:14.275108768Z" level=info msg="CreateContainer within sandbox \"2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 13:33:14.525127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377425225.mount: Deactivated successfully. Apr 14 13:33:14.529873 containerd[1463]: time="2026-04-14T13:33:14.526553816Z" level=info msg="CreateContainer within sandbox \"2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f857e0e563584f39429e389315e6ccab1939089e76a5fb81ab1ac96e5bad183e\"" Apr 14 13:33:14.567661 containerd[1463]: time="2026-04-14T13:33:14.567022040Z" level=info msg="StartContainer for \"f857e0e563584f39429e389315e6ccab1939089e76a5fb81ab1ac96e5bad183e\"" Apr 14 13:33:14.598949 kubelet[2519]: E0414 13:33:14.598203 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:14.881574 systemd-networkd[1400]: cali6ddd9e5ba3a: Gained IPv6LL Apr 14 13:33:14.948380 systemd[1]: Started cri-containerd-f857e0e563584f39429e389315e6ccab1939089e76a5fb81ab1ac96e5bad183e.scope - libcontainer container f857e0e563584f39429e389315e6ccab1939089e76a5fb81ab1ac96e5bad183e. 
Apr 14 13:33:15.135195 kubelet[2519]: I0414 13:33:15.130722 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bpmv2" podStartSLOduration=79.127719188 podStartE2EDuration="1m19.127719188s" podCreationTimestamp="2026-04-14 13:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:33:15.081163599 +0000 UTC m=+83.698778176" watchObservedRunningTime="2026-04-14 13:33:15.127719188 +0000 UTC m=+83.745333826" Apr 14 13:33:15.281097 containerd[1463]: time="2026-04-14T13:33:15.278230389Z" level=info msg="StartContainer for \"f857e0e563584f39429e389315e6ccab1939089e76a5fb81ab1ac96e5bad183e\" returns successfully" Apr 14 13:33:15.761301 kubelet[2519]: E0414 13:33:15.760606 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:15.770671 kubelet[2519]: E0414 13:33:15.766330 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:16.190019 containerd[1463]: time="2026-04-14T13:33:16.184924727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:16.200282 containerd[1463]: time="2026-04-14T13:33:16.199970671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 14 13:33:16.209139 containerd[1463]: time="2026-04-14T13:33:16.206427234Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:16.230068 containerd[1463]: time="2026-04-14T13:33:16.229894687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:16.268990 containerd[1463]: time="2026-04-14T13:33:16.264436564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 11.193701728s" Apr 14 13:33:16.268990 containerd[1463]: time="2026-04-14T13:33:16.264600867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 13:33:16.345912 containerd[1463]: time="2026-04-14T13:33:16.345491848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 14 13:33:16.388200 containerd[1463]: time="2026-04-14T13:33:16.387859226Z" level=info msg="CreateContainer within sandbox \"f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 13:33:16.555596 containerd[1463]: time="2026-04-14T13:33:16.555467769Z" level=info msg="CreateContainer within sandbox \"f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25403e2f1e5beaeb11931025157a8e8fd45dbcb919939e10764483ca9506875d\"" Apr 14 13:33:16.583576 containerd[1463]: time="2026-04-14T13:33:16.583408247Z" level=info msg="StartContainer for \"25403e2f1e5beaeb11931025157a8e8fd45dbcb919939e10764483ca9506875d\"" Apr 14 13:33:16.880248 systemd[1]: Started cri-containerd-25403e2f1e5beaeb11931025157a8e8fd45dbcb919939e10764483ca9506875d.scope - libcontainer container 25403e2f1e5beaeb11931025157a8e8fd45dbcb919939e10764483ca9506875d. Apr 14 13:33:16.957859 kubelet[2519]: E0414 13:33:16.957645 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:16.960115 kubelet[2519]: E0414 13:33:16.958435 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:17.386072 containerd[1463]: time="2026-04-14T13:33:17.383187580Z" level=info msg="StartContainer for \"25403e2f1e5beaeb11931025157a8e8fd45dbcb919939e10764483ca9506875d\" returns successfully" Apr 14 13:33:17.431645 kubelet[2519]: I0414 13:33:17.431412 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t4bm4" podStartSLOduration=81.431211688 podStartE2EDuration="1m21.431211688s" podCreationTimestamp="2026-04-14 13:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:33:16.347650619 +0000 UTC m=+84.965265186" watchObservedRunningTime="2026-04-14 13:33:17.431211688 +0000 UTC m=+86.048826254" Apr 14 13:33:17.979239 kubelet[2519]: E0414 13:33:17.979106 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:18.385875 kubelet[2519]: I0414 13:33:18.371680 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-55877c889c-7wj62" podStartSLOduration=54.098617089 podStartE2EDuration="1m5.371593138s" podCreationTimestamp="2026-04-14 13:32:13 +0000 UTC" firstStartedPulling="2026-04-14 13:33:05.068738753 +0000 UTC m=+73.686353325" lastFinishedPulling="2026-04-14 13:33:16.341714811 +0000 UTC m=+84.959329374" observedRunningTime="2026-04-14 13:33:18.337558373 +0000 UTC m=+86.955172937" watchObservedRunningTime="2026-04-14 13:33:18.371593138 +0000 UTC m=+86.989207714" Apr 14 13:33:19.077014 kubelet[2519]: I0414 13:33:19.076646 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 13:33:19.083195 kubelet[2519]: E0414 13:33:19.082036 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:19.190870 containerd[1463]: time="2026-04-14T13:33:19.180237180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:19.200945 containerd[1463]: time="2026-04-14T13:33:19.199459183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 14 13:33:19.201585 containerd[1463]: time="2026-04-14T13:33:19.201502504Z" level=info msg="ImageCreate event 
name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:19.294105 containerd[1463]: time="2026-04-14T13:33:19.294013305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:19.297058 containerd[1463]: time="2026-04-14T13:33:19.295537052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.949235922s" Apr 14 13:33:19.297058 containerd[1463]: time="2026-04-14T13:33:19.295584695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 14 13:33:19.326243 containerd[1463]: time="2026-04-14T13:33:19.326134431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 14 13:33:19.359877 containerd[1463]: time="2026-04-14T13:33:19.355850544Z" level=info msg="CreateContainer within sandbox \"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 14 13:33:19.522417 containerd[1463]: time="2026-04-14T13:33:19.521480992Z" level=info msg="CreateContainer within sandbox \"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"31bd91366a7fa8852e3bdbf9558bdcdac1f7615e5c17e2c80bc82e3fb2c7f19d\"" Apr 14 13:33:19.539872 containerd[1463]: time="2026-04-14T13:33:19.538010942Z" level=info msg="StartContainer for \"31bd91366a7fa8852e3bdbf9558bdcdac1f7615e5c17e2c80bc82e3fb2c7f19d\"" Apr 14 13:33:19.727959 systemd[1]: Started cri-containerd-31bd91366a7fa8852e3bdbf9558bdcdac1f7615e5c17e2c80bc82e3fb2c7f19d.scope - libcontainer container 31bd91366a7fa8852e3bdbf9558bdcdac1f7615e5c17e2c80bc82e3fb2c7f19d. Apr 14 13:33:20.313618 containerd[1463]: time="2026-04-14T13:33:20.313529519Z" level=info msg="StartContainer for \"31bd91366a7fa8852e3bdbf9558bdcdac1f7615e5c17e2c80bc82e3fb2c7f19d\" returns successfully" Apr 14 13:33:21.946954 kubelet[2519]: E0414 13:33:21.946672 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:33:22.313675 systemd[1]: run-containerd-runc-k8s.io-7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93-runc.hITjRm.mount: Deactivated successfully. 
Apr 14 13:33:26.643892 containerd[1463]: time="2026-04-14T13:33:26.642497824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:26.648452 containerd[1463]: time="2026-04-14T13:33:26.646253015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 14 13:33:26.649968 containerd[1463]: time="2026-04-14T13:33:26.649705574Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:26.671303 containerd[1463]: time="2026-04-14T13:33:26.671105636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:26.691704 containerd[1463]: time="2026-04-14T13:33:26.691571571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 7.365277448s" Apr 14 13:33:26.691704 containerd[1463]: time="2026-04-14T13:33:26.691717893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 14 13:33:26.761036 containerd[1463]: time="2026-04-14T13:33:26.758544707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 13:33:26.975085 containerd[1463]: time="2026-04-14T13:33:26.970941597Z" level=info msg="CreateContainer within sandbox \"67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 14 13:33:27.061069 containerd[1463]: time="2026-04-14T13:33:27.060673844Z" level=info msg="CreateContainer within sandbox \"67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce\"" Apr 14 13:33:27.072972 containerd[1463]: time="2026-04-14T13:33:27.068568432Z" level=info msg="StartContainer for \"76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce\"" Apr 14 13:33:27.369504 systemd[1]: Started cri-containerd-76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce.scope - libcontainer container 76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce. 
Apr 14 13:33:27.486988 containerd[1463]: time="2026-04-14T13:33:27.486147894Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:27.488576 containerd[1463]: time="2026-04-14T13:33:27.488544861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 14 13:33:27.547644 containerd[1463]: time="2026-04-14T13:33:27.547498953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 788.484956ms" Apr 14 13:33:27.547644 containerd[1463]: time="2026-04-14T13:33:27.547675785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 13:33:27.555862 containerd[1463]: time="2026-04-14T13:33:27.553459840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 14 13:33:27.729408 containerd[1463]: time="2026-04-14T13:33:27.727398982Z" level=info msg="CreateContainer within sandbox \"1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 13:33:27.924065 containerd[1463]: time="2026-04-14T13:33:27.918585154Z" level=info msg="CreateContainer within sandbox \"1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a27a4d2b139e253dce4177524d429e5198850b2405b8a343462fcb3f7de549f4\"" Apr 14 13:33:27.932972 containerd[1463]: time="2026-04-14T13:33:27.929028538Z" level=info msg="StartContainer for \"a27a4d2b139e253dce4177524d429e5198850b2405b8a343462fcb3f7de549f4\"" Apr 14 13:33:28.104945 systemd[1]: Started cri-containerd-a27a4d2b139e253dce4177524d429e5198850b2405b8a343462fcb3f7de549f4.scope - libcontainer container a27a4d2b139e253dce4177524d429e5198850b2405b8a343462fcb3f7de549f4. 
Apr 14 13:33:28.323448 containerd[1463]: time="2026-04-14T13:33:28.323333093Z" level=info msg="StartContainer for \"76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce\" returns successfully" Apr 14 13:33:28.450186 containerd[1463]: time="2026-04-14T13:33:28.443247405Z" level=info msg="StartContainer for \"a27a4d2b139e253dce4177524d429e5198850b2405b8a343462fcb3f7de549f4\" returns successfully" Apr 14 13:33:29.760494 kubelet[2519]: I0414 13:33:29.760135 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f5c776cfd-5dq8p" podStartSLOduration=55.103057144 podStartE2EDuration="1m13.760087381s" podCreationTimestamp="2026-04-14 13:32:16 +0000 UTC" firstStartedPulling="2026-04-14 13:33:08.096167371 +0000 UTC m=+76.713781974" lastFinishedPulling="2026-04-14 13:33:26.753197645 +0000 UTC m=+95.370812211" observedRunningTime="2026-04-14 13:33:29.721029498 +0000 UTC m=+98.338644078" watchObservedRunningTime="2026-04-14 13:33:29.760087381 +0000 UTC m=+98.377701955" Apr 14 13:33:30.323388 kubelet[2519]: I0414 13:33:30.323101 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-55877c889c-n22g4" podStartSLOduration=58.743320458 podStartE2EDuration="1m17.322896958s" podCreationTimestamp="2026-04-14 13:32:13 +0000 UTC" firstStartedPulling="2026-04-14 13:33:08.973073748 +0000 UTC m=+77.590688322" lastFinishedPulling="2026-04-14 13:33:27.552650258 +0000 UTC m=+96.170264822" observedRunningTime="2026-04-14 13:33:30.15271643 +0000 UTC m=+98.770331034" watchObservedRunningTime="2026-04-14 13:33:30.322896958 +0000 UTC m=+98.940511533" Apr 14 13:33:33.188004 kubelet[2519]: I0414 13:33:33.187914 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 13:33:35.570359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798146435.mount: Deactivated successfully. 
Apr 14 13:33:37.612073 containerd[1463]: time="2026-04-14T13:33:37.610694220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:37.613703 containerd[1463]: time="2026-04-14T13:33:37.612848992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 14 13:33:37.616515 containerd[1463]: time="2026-04-14T13:33:37.616376153Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:37.623147 containerd[1463]: time="2026-04-14T13:33:37.622714308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:37.625705 containerd[1463]: time="2026-04-14T13:33:37.625521868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 10.07202173s" Apr 14 13:33:37.625705 containerd[1463]: time="2026-04-14T13:33:37.625612095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 14 13:33:37.677301 containerd[1463]: time="2026-04-14T13:33:37.676919591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 14 13:33:37.816311 containerd[1463]: time="2026-04-14T13:33:37.816162944Z" level=info msg="CreateContainer within sandbox \"6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 14 13:33:38.003971 containerd[1463]: time="2026-04-14T13:33:38.003630171Z" level=info msg="CreateContainer within sandbox \"6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a\"" Apr 14 13:33:38.015904 containerd[1463]: time="2026-04-14T13:33:38.015549701Z" level=info msg="StartContainer for \"d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a\"" Apr 14 13:33:38.279501 systemd[1]: run-containerd-runc-k8s.io-d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a-runc.xcfGX8.mount: Deactivated successfully. Apr 14 13:33:38.292300 systemd[1]: Started cri-containerd-d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a.scope - libcontainer container d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a. 
Apr 14 13:33:38.749382 containerd[1463]: time="2026-04-14T13:33:38.747546971Z" level=info msg="StartContainer for \"d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a\" returns successfully" Apr 14 13:33:41.381868 containerd[1463]: time="2026-04-14T13:33:41.379929235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:41.381868 containerd[1463]: time="2026-04-14T13:33:41.381428667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 14 13:33:41.387497 containerd[1463]: time="2026-04-14T13:33:41.387063507Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:41.463964 containerd[1463]: time="2026-04-14T13:33:41.463273654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:33:41.479438 containerd[1463]: time="2026-04-14T13:33:41.479302728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.802294455s" Apr 14 13:33:41.479438 containerd[1463]: time="2026-04-14T13:33:41.479389676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 14 13:33:41.593304 containerd[1463]: time="2026-04-14T13:33:41.592494418Z" level=info msg="CreateContainer within sandbox \"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 14 13:33:41.730023 containerd[1463]: time="2026-04-14T13:33:41.726292659Z" level=info msg="CreateContainer within sandbox \"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"35bf52a09f2b92730c6c276a780c656953f3e86e3c96974b99e0548cea3a1770\"" Apr 14 13:33:41.734467 containerd[1463]: time="2026-04-14T13:33:41.733700767Z" level=info msg="StartContainer for \"35bf52a09f2b92730c6c276a780c656953f3e86e3c96974b99e0548cea3a1770\"" Apr 14 13:33:41.898082 systemd[1]: run-containerd-runc-k8s.io-35bf52a09f2b92730c6c276a780c656953f3e86e3c96974b99e0548cea3a1770-runc.YP5Cx6.mount: Deactivated successfully. Apr 14 13:33:41.951066 systemd[1]: Started cri-containerd-35bf52a09f2b92730c6c276a780c656953f3e86e3c96974b99e0548cea3a1770.scope - libcontainer container 35bf52a09f2b92730c6c276a780c656953f3e86e3c96974b99e0548cea3a1770. 
Apr 14 13:33:42.193178 containerd[1463]: time="2026-04-14T13:33:42.193041006Z" level=info msg="StartContainer for \"35bf52a09f2b92730c6c276a780c656953f3e86e3c96974b99e0548cea3a1770\" returns successfully" Apr 14 13:33:42.597020 kubelet[2519]: I0414 13:33:42.596718 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-lq4vt" podStartSLOduration=61.184351467 podStartE2EDuration="1m28.596693859s" podCreationTimestamp="2026-04-14 13:32:14 +0000 UTC" firstStartedPulling="2026-04-14 13:33:10.261333462 +0000 UTC m=+78.878948035" lastFinishedPulling="2026-04-14 13:33:37.673675852 +0000 UTC m=+106.291290427" observedRunningTime="2026-04-14 13:33:39.270591603 +0000 UTC m=+107.888206170" watchObservedRunningTime="2026-04-14 13:33:42.596693859 +0000 UTC m=+111.214308442" Apr 14 13:33:43.943156 kubelet[2519]: I0414 13:33:43.942937 2519 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 14 13:33:43.950924 kubelet[2519]: I0414 13:33:43.950659 2519 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 14 13:33:52.155277 containerd[1463]: time="2026-04-14T13:33:52.155177367Z" level=info msg="StopPodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\"" Apr 14 13:33:52.339564 systemd[1]: run-containerd-runc-k8s.io-d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a-runc.GAkPO1.mount: Deactivated successfully. Apr 14 13:33:52.425704 systemd[1]: run-containerd-runc-k8s.io-7e193cab38afd6325f67cb839a3ee41e57dc651f6410ac94f7963395657e1c93-runc.ThsBK0.mount: Deactivated successfully. Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.093 [WARNING][5810] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--lq4vt-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a573c7f5-88e5-4897-8831-187a489d5981", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661", Pod:"goldmane-5b85766d88-lq4vt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9be359ce271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.096 [INFO][5810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.096 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" iface="eth0" netns="" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.097 [INFO][5810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.097 [INFO][5810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.538 [INFO][5841] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.538 [INFO][5841] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.538 [INFO][5841] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.568 [WARNING][5841] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.569 [INFO][5841] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.596 [INFO][5841] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:53.639585 containerd[1463]: 2026-04-14 13:33:53.629 [INFO][5810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:53.665849 containerd[1463]: time="2026-04-14T13:33:53.665669518Z" level=info msg="TearDown network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" successfully" Apr 14 13:33:53.665849 containerd[1463]: time="2026-04-14T13:33:53.665776134Z" level=info msg="StopPodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" returns successfully" Apr 14 13:33:53.780727 containerd[1463]: time="2026-04-14T13:33:53.778330006Z" level=info msg="RemovePodSandbox for \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\"" Apr 14 13:33:53.780727 containerd[1463]: time="2026-04-14T13:33:53.778394589Z" level=info msg="Forcibly stopping sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\"" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.018 [WARNING][5859] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--lq4vt-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a573c7f5-88e5-4897-8831-187a489d5981", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6414d96a91bcfa86c4461bc9fdd1e852e92468c70164f41d86b4b02ab5ad0661", Pod:"goldmane-5b85766d88-lq4vt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9be359ce271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.018 [INFO][5859] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.018 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" iface="eth0" netns="" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.018 [INFO][5859] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.018 [INFO][5859] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.091 [INFO][5867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.094 [INFO][5867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.097 [INFO][5867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.221 [WARNING][5867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.221 [INFO][5867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" HandleID="k8s-pod-network.6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Workload="localhost-k8s-goldmane--5b85766d88--lq4vt-eth0" Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.245 [INFO][5867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:54.254356 containerd[1463]: 2026-04-14 13:33:54.249 [INFO][5859] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8" Apr 14 13:33:54.258213 containerd[1463]: time="2026-04-14T13:33:54.254544199Z" level=info msg="TearDown network for sandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" successfully" Apr 14 13:33:54.285073 containerd[1463]: time="2026-04-14T13:33:54.284544318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:33:54.285073 containerd[1463]: time="2026-04-14T13:33:54.284759663Z" level=info msg="RemovePodSandbox \"6d653524f7de3a126610ece5759b0ac08a1adf140795b1ec5899a313f840d2f8\" returns successfully" Apr 14 13:33:54.299082 containerd[1463]: time="2026-04-14T13:33:54.298959093Z" level=info msg="StopPodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\"" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.537 [WARNING][5884] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" WorkloadEndpoint="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.547 [INFO][5884] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.549 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" iface="eth0" netns="" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.550 [INFO][5884] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.550 [INFO][5884] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.675 [INFO][5898] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.677 [INFO][5898] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.677 [INFO][5898] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.739 [WARNING][5898] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.755 [INFO][5898] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.853 [INFO][5898] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:54.865000 containerd[1463]: 2026-04-14 13:33:54.859 [INFO][5884] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:54.867109 containerd[1463]: time="2026-04-14T13:33:54.865894072Z" level=info msg="TearDown network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" successfully" Apr 14 13:33:54.867109 containerd[1463]: time="2026-04-14T13:33:54.865963689Z" level=info msg="StopPodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" returns successfully" Apr 14 13:33:54.871876 containerd[1463]: time="2026-04-14T13:33:54.870188329Z" level=info msg="RemovePodSandbox for \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\"" Apr 14 13:33:54.871876 containerd[1463]: time="2026-04-14T13:33:54.870307761Z" level=info msg="Forcibly stopping sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\"" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.066 [WARNING][5916] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" WorkloadEndpoint="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.066 [INFO][5916] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.066 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" iface="eth0" netns="" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.066 [INFO][5916] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.066 [INFO][5916] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.186 [INFO][5925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.187 [INFO][5925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.187 [INFO][5925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.271 [WARNING][5925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.275 [INFO][5925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" HandleID="k8s-pod-network.9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Workload="localhost-k8s-whisker--56b974bcd6--psxbc-eth0" Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.355 [INFO][5925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:55.385887 containerd[1463]: 2026-04-14 13:33:55.382 [INFO][5916] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8" Apr 14 13:33:55.385887 containerd[1463]: time="2026-04-14T13:33:55.385639456Z" level=info msg="TearDown network for sandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" successfully" Apr 14 13:33:55.395065 containerd[1463]: time="2026-04-14T13:33:55.394698214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:33:55.395954 containerd[1463]: time="2026-04-14T13:33:55.395255591Z" level=info msg="RemovePodSandbox \"9ec0ba2074f73a679f1f7a6180f3ef0d57847162c63053d9967c2e322769a8b8\" returns successfully" Apr 14 13:33:55.402587 containerd[1463]: time="2026-04-14T13:33:55.402484332Z" level=info msg="StopPodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\"" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.761 [WARNING][5942] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2", Pod:"calico-apiserver-55877c889c-7wj62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb4966f5b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.762 [INFO][5942] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.762 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" iface="eth0" netns="" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.762 [INFO][5942] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.762 [INFO][5942] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.882 [INFO][5950] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.884 [INFO][5950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:55.885 [INFO][5950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:56.046 [WARNING][5950] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:56.046 [INFO][5950] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:56.063 [INFO][5950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:56.070669 containerd[1463]: 2026-04-14 13:33:56.067 [INFO][5942] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.073632 containerd[1463]: time="2026-04-14T13:33:56.072654565Z" level=info msg="TearDown network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" successfully" Apr 14 13:33:56.073632 containerd[1463]: time="2026-04-14T13:33:56.073005387Z" level=info msg="StopPodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" returns successfully" Apr 14 13:33:56.078573 containerd[1463]: time="2026-04-14T13:33:56.077773616Z" level=info msg="RemovePodSandbox for \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\"" Apr 14 13:33:56.080571 containerd[1463]: time="2026-04-14T13:33:56.079238210Z" level=info msg="Forcibly stopping sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\"" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.448 [WARNING][5967] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"c556a0c9-d9a1-4a5f-8f2f-a48e00e5d5e7", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f85b645adc38912846e3c94c6a3cbeda0d9b2c19fa0f01a8e6c3cb3ba36385d2", Pod:"calico-apiserver-55877c889c-7wj62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califb4966f5b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.458 [INFO][5967] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.462 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" iface="eth0" netns="" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.462 [INFO][5967] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.462 [INFO][5967] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.686 [INFO][5976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.687 [INFO][5976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.689 [INFO][5976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.726 [WARNING][5976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.726 [INFO][5976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" HandleID="k8s-pod-network.b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Workload="localhost-k8s-calico--apiserver--55877c889c--7wj62-eth0" Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.756 [INFO][5976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:56.777300 containerd[1463]: 2026-04-14 13:33:56.763 [INFO][5967] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319" Apr 14 13:33:56.777300 containerd[1463]: time="2026-04-14T13:33:56.775978389Z" level=info msg="TearDown network for sandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" successfully" Apr 14 13:33:56.882409 containerd[1463]: time="2026-04-14T13:33:56.882283995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:33:56.883032 containerd[1463]: time="2026-04-14T13:33:56.882504208Z" level=info msg="RemovePodSandbox \"b79afdeeb717c78dbe1ad5cd105578e4d2b6c3f72b22902d8e597a57e0f69319\" returns successfully" Apr 14 13:33:56.887694 containerd[1463]: time="2026-04-14T13:33:56.887492941Z" level=info msg="StopPodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\"" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.181 [WARNING][5992] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eec711c2-8d03-4974-9177-e6d5f178fa6e", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8", Pod:"coredns-674b8bbfcf-t4bm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ddd9e5ba3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.188 [INFO][5992] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.189 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" iface="eth0" netns="" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.189 [INFO][5992] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.189 [INFO][5992] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.370 [INFO][6001] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.371 [INFO][6001] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.371 [INFO][6001] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.415 [WARNING][6001] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.415 [INFO][6001] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.427 [INFO][6001] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:57.433244 containerd[1463]: 2026-04-14 13:33:57.429 [INFO][5992] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:57.434396 containerd[1463]: time="2026-04-14T13:33:57.433364438Z" level=info msg="TearDown network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" successfully" Apr 14 13:33:57.434396 containerd[1463]: time="2026-04-14T13:33:57.433404300Z" level=info msg="StopPodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" returns successfully" Apr 14 13:33:57.438422 containerd[1463]: time="2026-04-14T13:33:57.438306472Z" level=info msg="RemovePodSandbox for \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\"" Apr 14 13:33:57.439099 containerd[1463]: time="2026-04-14T13:33:57.438452003Z" level=info msg="Forcibly stopping sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\"" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.783 [WARNING][6024] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eec711c2-8d03-4974-9177-e6d5f178fa6e", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ba4ef45f048a7312dbc81971af447c23c3ea22485ce8110bc65a514ce7c0bd8", Pod:"coredns-674b8bbfcf-t4bm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ddd9e5ba3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.784 [INFO][6024] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.784 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" iface="eth0" netns="" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.784 [INFO][6024] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.784 [INFO][6024] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.900 [INFO][6033] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.950 [INFO][6033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.951 [INFO][6033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.998 [WARNING][6033] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:57.999 [INFO][6033] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" HandleID="k8s-pod-network.f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Workload="localhost-k8s-coredns--674b8bbfcf--t4bm4-eth0" Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:58.075 [INFO][6033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:58.092602 containerd[1463]: 2026-04-14 13:33:58.088 [INFO][6024] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af" Apr 14 13:33:58.093409 containerd[1463]: time="2026-04-14T13:33:58.092767355Z" level=info msg="TearDown network for sandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" successfully" Apr 14 13:33:58.119439 containerd[1463]: time="2026-04-14T13:33:58.119140538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:33:58.119915 containerd[1463]: time="2026-04-14T13:33:58.119707417Z" level=info msg="RemovePodSandbox \"f06036a2654a39e009852b4364ab47b74bc6e1d870ada3b93a89cbf8a56e29af\" returns successfully" Apr 14 13:33:58.121610 containerd[1463]: time="2026-04-14T13:33:58.121568505Z" level=info msg="StopPodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\"" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.464 [WARNING][6052] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0", GenerateName:"calico-kube-controllers-6f5c776cfd-", Namespace:"calico-system", SelfLink:"", UID:"a6981126-7658-4757-a8d5-0d67c493dae2", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f5c776cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3", Pod:"calico-kube-controllers-6f5c776cfd-5dq8p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali346f3c6c6fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.465 [INFO][6052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.465 [INFO][6052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" iface="eth0" netns="" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.465 [INFO][6052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.465 [INFO][6052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.543 [INFO][6061] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.543 [INFO][6061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.543 [INFO][6061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.646 [WARNING][6061] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.647 [INFO][6061] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.677 [INFO][6061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:58.687495 containerd[1463]: 2026-04-14 13:33:58.681 [INFO][6052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:58.688990 containerd[1463]: time="2026-04-14T13:33:58.688941622Z" level=info msg="TearDown network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" successfully" Apr 14 13:33:58.689074 containerd[1463]: time="2026-04-14T13:33:58.688993427Z" level=info msg="StopPodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" returns successfully" Apr 14 13:33:58.694575 containerd[1463]: time="2026-04-14T13:33:58.694464055Z" level=info msg="RemovePodSandbox for \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\"" Apr 14 13:33:58.694575 containerd[1463]: time="2026-04-14T13:33:58.694520154Z" level=info msg="Forcibly stopping sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\"" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:58.979 [WARNING][6079] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0", GenerateName:"calico-kube-controllers-6f5c776cfd-", Namespace:"calico-system", SelfLink:"", UID:"a6981126-7658-4757-a8d5-0d67c493dae2", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f5c776cfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67f7f756b15d6c2d80ee6617b4d2f57814c1c5fd4f3533978a5c4f5dbb298ad3", Pod:"calico-kube-controllers-6f5c776cfd-5dq8p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali346f3c6c6fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:58.980 [INFO][6079] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:58.980 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" iface="eth0" netns="" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:58.980 [INFO][6079] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:58.980 [INFO][6079] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.035 [INFO][6087] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.035 [INFO][6087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.035 [INFO][6087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.163 [WARNING][6087] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.163 [INFO][6087] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" HandleID="k8s-pod-network.249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Workload="localhost-k8s-calico--kube--controllers--6f5c776cfd--5dq8p-eth0" Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.214 [INFO][6087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:59.222873 containerd[1463]: 2026-04-14 13:33:59.216 [INFO][6079] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86" Apr 14 13:33:59.222873 containerd[1463]: time="2026-04-14T13:33:59.220621348Z" level=info msg="TearDown network for sandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" successfully" Apr 14 13:33:59.235362 containerd[1463]: time="2026-04-14T13:33:59.235090936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:33:59.235362 containerd[1463]: time="2026-04-14T13:33:59.235190861Z" level=info msg="RemovePodSandbox \"249729552e7489f16ceed6e4921c213f828ce359c893ff2f5a1fdc01881caa86\" returns successfully" Apr 14 13:33:59.237160 containerd[1463]: time="2026-04-14T13:33:59.237090308Z" level=info msg="StopPodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\"" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.504 [WARNING][6105] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cps8s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2556d03-7ce4-4031-9834-67fb67a536f0", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201", Pod:"csi-node-driver-cps8s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0edb9ea8d23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.505 [INFO][6105] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.505 [INFO][6105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" iface="eth0" netns="" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.505 [INFO][6105] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.505 [INFO][6105] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.661 [INFO][6113] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.661 [INFO][6113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.661 [INFO][6113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.707 [WARNING][6113] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.708 [INFO][6113] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.727 [INFO][6113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:33:59.731442 containerd[1463]: 2026-04-14 13:33:59.729 [INFO][6105] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:33:59.734158 containerd[1463]: time="2026-04-14T13:33:59.731553158Z" level=info msg="TearDown network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" successfully" Apr 14 13:33:59.734158 containerd[1463]: time="2026-04-14T13:33:59.731602010Z" level=info msg="StopPodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" returns successfully" Apr 14 13:33:59.734460 containerd[1463]: time="2026-04-14T13:33:59.734416047Z" level=info msg="RemovePodSandbox for \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\"" Apr 14 13:33:59.734492 containerd[1463]: time="2026-04-14T13:33:59.734470393Z" level=info msg="Forcibly stopping sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\"" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.070 [WARNING][6132] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cps8s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2556d03-7ce4-4031-9834-67fb67a536f0", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14d8945f6c0e7b1fa9aaccb262f92412703f14742138764e7d4968d9881ba201", Pod:"csi-node-driver-cps8s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0edb9ea8d23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.071 [INFO][6132] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.071 [INFO][6132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" iface="eth0" netns="" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.076 [INFO][6132] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.076 [INFO][6132] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.184 [INFO][6141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.185 [INFO][6141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.185 [INFO][6141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.237 [WARNING][6141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.237 [INFO][6141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" HandleID="k8s-pod-network.cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Workload="localhost-k8s-csi--node--driver--cps8s-eth0" Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.267 [INFO][6141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:00.278262 containerd[1463]: 2026-04-14 13:34:00.271 [INFO][6132] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58" Apr 14 13:34:00.283077 containerd[1463]: time="2026-04-14T13:34:00.278554764Z" level=info msg="TearDown network for sandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" successfully" Apr 14 13:34:00.290754 containerd[1463]: time="2026-04-14T13:34:00.290209339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:34:00.291288 containerd[1463]: time="2026-04-14T13:34:00.291218426Z" level=info msg="RemovePodSandbox \"cebf74439a7363e58601ef2da5a3e494fe23b246761c6e3ee530cace0a2cab58\" returns successfully" Apr 14 13:34:00.293111 containerd[1463]: time="2026-04-14T13:34:00.292751591Z" level=info msg="StopPodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\"" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.664 [WARNING][6158] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"02259ab1-493b-4927-8c23-c062c006fdf7", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917", Pod:"calico-apiserver-55877c889c-n22g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf12efac954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.669 [INFO][6158] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.671 [INFO][6158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" iface="eth0" netns="" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.672 [INFO][6158] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.674 [INFO][6158] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.845 [INFO][6166] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.866 [INFO][6166] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:00.866 [INFO][6166] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:01.000 [WARNING][6166] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:01.002 [INFO][6166] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:01.042 [INFO][6166] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:01.051376 containerd[1463]: 2026-04-14 13:34:01.047 [INFO][6158] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.054258 containerd[1463]: time="2026-04-14T13:34:01.051482471Z" level=info msg="TearDown network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" successfully" Apr 14 13:34:01.054258 containerd[1463]: time="2026-04-14T13:34:01.051567891Z" level=info msg="StopPodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" returns successfully" Apr 14 13:34:01.054258 containerd[1463]: time="2026-04-14T13:34:01.054238819Z" level=info msg="RemovePodSandbox for \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\"" Apr 14 13:34:01.054422 containerd[1463]: time="2026-04-14T13:34:01.054277783Z" level=info msg="Forcibly stopping sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\"" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.353 [WARNING][6186] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0", GenerateName:"calico-apiserver-55877c889c-", Namespace:"calico-system", SelfLink:"", UID:"02259ab1-493b-4927-8c23-c062c006fdf7", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55877c889c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a7fc3b3e856f737a04e02c866b7e37e3e5e2737ea9e7156c41e1006cdd46917", Pod:"calico-apiserver-55877c889c-n22g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicf12efac954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.353 [INFO][6186] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.353 [INFO][6186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" iface="eth0" netns="" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.353 [INFO][6186] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.353 [INFO][6186] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.476 [INFO][6194] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.478 [INFO][6194] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.478 [INFO][6194] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.553 [WARNING][6194] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.554 [INFO][6194] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" HandleID="k8s-pod-network.e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Workload="localhost-k8s-calico--apiserver--55877c889c--n22g4-eth0" Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.636 [INFO][6194] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:01.651699 containerd[1463]: 2026-04-14 13:34:01.645 [INFO][6186] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1" Apr 14 13:34:01.652624 containerd[1463]: time="2026-04-14T13:34:01.651761877Z" level=info msg="TearDown network for sandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" successfully" Apr 14 13:34:01.661011 containerd[1463]: time="2026-04-14T13:34:01.660903628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:34:01.661011 containerd[1463]: time="2026-04-14T13:34:01.661023477Z" level=info msg="RemovePodSandbox \"e71bc4a2d497bd5a253e028a6fe86abfb5ef8e7986f469b73de3e3c08c04bde1\" returns successfully" Apr 14 13:34:01.668695 containerd[1463]: time="2026-04-14T13:34:01.666771862Z" level=info msg="StopPodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\"" Apr 14 13:34:01.796853 kubelet[2519]: E0414 13:34:01.795152 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.883 [WARNING][6229] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c29fe4b2-bccb-43ac-94ff-906cb974bbf2", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874", Pod:"coredns-674b8bbfcf-bpmv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali503361c1731", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.883 [INFO][6229] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.883 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" iface="eth0" netns="" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.883 [INFO][6229] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.883 [INFO][6229] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.973 [INFO][6240] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.977 [INFO][6240] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:01.978 [INFO][6240] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:02.069 [WARNING][6240] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:02.069 [INFO][6240] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:02.113 [INFO][6240] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:02.120868 containerd[1463]: 2026-04-14 13:34:02.116 [INFO][6229] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.122586 containerd[1463]: time="2026-04-14T13:34:02.121042036Z" level=info msg="TearDown network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" successfully" Apr 14 13:34:02.122586 containerd[1463]: time="2026-04-14T13:34:02.121075836Z" level=info msg="StopPodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" returns successfully" Apr 14 13:34:02.122586 containerd[1463]: time="2026-04-14T13:34:02.122322844Z" level=info msg="RemovePodSandbox for \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\"" Apr 14 13:34:02.122586 containerd[1463]: time="2026-04-14T13:34:02.122351657Z" level=info msg="Forcibly stopping sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\"" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.672 [WARNING][6258] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c29fe4b2-bccb-43ac-94ff-906cb974bbf2", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ede9fadffa5654ce455ae8586583a1e8155a918821cd3a73b08741b1c9a1874", Pod:"coredns-674b8bbfcf-bpmv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali503361c1731", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.674 [INFO][6258] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.675 [INFO][6258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" iface="eth0" netns="" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.675 [INFO][6258] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.675 [INFO][6258] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.822 [INFO][6267] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.824 [INFO][6267] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.824 [INFO][6267] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.882 [WARNING][6267] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.883 [INFO][6267] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" HandleID="k8s-pod-network.d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Workload="localhost-k8s-coredns--674b8bbfcf--bpmv2-eth0" Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.932 [INFO][6267] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:34:02.939459 containerd[1463]: 2026-04-14 13:34:02.936 [INFO][6258] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc" Apr 14 13:34:02.942121 containerd[1463]: time="2026-04-14T13:34:02.939725360Z" level=info msg="TearDown network for sandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" successfully" Apr 14 13:34:02.957075 containerd[1463]: time="2026-04-14T13:34:02.956889421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 13:34:02.957075 containerd[1463]: time="2026-04-14T13:34:02.957099171Z" level=info msg="RemovePodSandbox \"d536ec14c5d503c7ab88411664f8dd6a65def86418f462e8047669a7b1bc94dc\" returns successfully" Apr 14 13:34:10.944162 kubelet[2519]: I0414 13:34:10.943679 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cps8s" podStartSLOduration=81.119171533 podStartE2EDuration="1m55.94359466s" podCreationTimestamp="2026-04-14 13:32:15 +0000 UTC" firstStartedPulling="2026-04-14 13:33:06.730443243 +0000 UTC m=+75.348057815" lastFinishedPulling="2026-04-14 13:33:41.554866376 +0000 UTC m=+110.172480942" observedRunningTime="2026-04-14 13:33:42.597446203 +0000 UTC m=+111.215060785" watchObservedRunningTime="2026-04-14 13:34:10.94359466 +0000 UTC m=+139.561209234" Apr 14 13:34:14.775589 kubelet[2519]: E0414 13:34:14.775461 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:21.795537 kubelet[2519]: E0414 13:34:21.795132 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:34.268155 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:56660.service - OpenSSH per-connection server daemon (10.0.0.1:56660). Apr 14 13:34:34.520217 sshd[6408]: Accepted publickey for core from 10.0.0.1 port 56660 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:34:34.522217 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:34:34.534230 systemd-logind[1450]: New session 8 of user core. Apr 14 13:34:34.545858 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 14 13:34:35.524462 sshd[6408]: pam_unix(sshd:session): session closed for user core Apr 14 13:34:35.538343 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Apr 14 13:34:35.542735 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:56660.service: Deactivated successfully. Apr 14 13:34:35.551486 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 13:34:35.560539 systemd-logind[1450]: Removed session 8. Apr 14 13:34:35.799676 kubelet[2519]: E0414 13:34:35.798490 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:36.770132 kubelet[2519]: E0414 13:34:36.762366 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:39.791862 kubelet[2519]: E0414 13:34:39.790232 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:40.575563 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:41944.service - OpenSSH per-connection server daemon (10.0.0.1:41944). Apr 14 13:34:40.749213 sshd[6464]: Accepted publickey for core from 10.0.0.1 port 41944 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:34:40.748690 sshd[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:34:40.768298 systemd-logind[1450]: New session 9 of user core. Apr 14 13:34:40.776358 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 13:34:41.229655 sshd[6464]: pam_unix(sshd:session): session closed for user core Apr 14 13:34:41.261735 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:41944.service: Deactivated successfully. Apr 14 13:34:41.267053 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 13:34:41.268180 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Apr 14 13:34:41.269349 systemd-logind[1450]: Removed session 9. Apr 14 13:34:46.293332 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:41954.service - OpenSSH per-connection server daemon (10.0.0.1:41954). Apr 14 13:34:46.485007 sshd[6479]: Accepted publickey for core from 10.0.0.1 port 41954 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:34:46.493696 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:34:46.532391 systemd-logind[1450]: New session 10 of user core. Apr 14 13:34:46.543357 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 13:34:47.250478 sshd[6479]: pam_unix(sshd:session): session closed for user core Apr 14 13:34:47.263370 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:41954.service: Deactivated successfully. Apr 14 13:34:47.282792 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 13:34:47.285124 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Apr 14 13:34:47.300535 systemd-logind[1450]: Removed session 10. Apr 14 13:34:47.781126 kubelet[2519]: E0414 13:34:47.781023 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:34:52.277339 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:41398.service - OpenSSH per-connection server daemon (10.0.0.1:41398). 
Apr 14 13:34:52.551747 sshd[6539]: Accepted publickey for core from 10.0.0.1 port 41398 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:34:52.561534 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:34:52.597942 systemd-logind[1450]: New session 11 of user core. Apr 14 13:34:52.615860 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 13:34:53.264743 sshd[6539]: pam_unix(sshd:session): session closed for user core Apr 14 13:34:53.294679 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:41398.service: Deactivated successfully. Apr 14 13:34:53.321636 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 13:34:53.325414 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Apr 14 13:34:53.329894 systemd-logind[1450]: Removed session 11. Apr 14 13:34:58.304993 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:41404.service - OpenSSH per-connection server daemon (10.0.0.1:41404). Apr 14 13:34:58.481235 sshd[6556]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:34:58.485189 sshd[6556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:34:58.588930 systemd-logind[1450]: New session 12 of user core. Apr 14 13:34:58.608458 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 13:34:59.154715 sshd[6556]: pam_unix(sshd:session): session closed for user core Apr 14 13:34:59.166951 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:41404.service: Deactivated successfully. Apr 14 13:34:59.225920 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 13:34:59.238123 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Apr 14 13:34:59.244254 systemd-logind[1450]: Removed session 12. Apr 14 13:35:04.253074 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:43270.service - OpenSSH per-connection server daemon (10.0.0.1:43270). Apr 14 13:35:04.376112 sshd[6589]: Accepted publickey for core from 10.0.0.1 port 43270 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:04.380008 sshd[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:04.445091 systemd-logind[1450]: New session 13 of user core. Apr 14 13:35:04.453934 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 14 13:35:05.049543 sshd[6589]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:05.082635 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:43270.service: Deactivated successfully. Apr 14 13:35:05.122856 systemd[1]: session-13.scope: Deactivated successfully. Apr 14 13:35:05.124462 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Apr 14 13:35:05.128453 systemd-logind[1450]: Removed session 13. Apr 14 13:35:10.112839 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:51884.service - OpenSSH per-connection server daemon (10.0.0.1:51884). Apr 14 13:35:10.315096 sshd[6604]: Accepted publickey for core from 10.0.0.1 port 51884 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:10.316792 sshd[6604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:10.330731 systemd-logind[1450]: New session 14 of user core. Apr 14 13:35:10.341856 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 14 13:35:10.883153 sshd[6604]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:10.896018 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:51884.service: Deactivated successfully. Apr 14 13:35:10.901405 systemd[1]: session-14.scope: Deactivated successfully. Apr 14 13:35:10.909164 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Apr 14 13:35:10.918284 systemd-logind[1450]: Removed session 14. Apr 14 13:35:15.921245 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:51896.service - OpenSSH per-connection server daemon (10.0.0.1:51896). Apr 14 13:35:15.964331 sshd[6640]: Accepted publickey for core from 10.0.0.1 port 51896 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:15.973724 sshd[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:16.084737 systemd-logind[1450]: New session 15 of user core. Apr 14 13:35:16.089145 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 14 13:35:16.561626 sshd[6640]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:16.571632 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:51896.service: Deactivated successfully. Apr 14 13:35:16.576702 systemd[1]: session-15.scope: Deactivated successfully. Apr 14 13:35:16.590921 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Apr 14 13:35:16.596281 systemd-logind[1450]: Removed session 15. Apr 14 13:35:19.760731 kubelet[2519]: E0414 13:35:19.760227 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:35:21.628487 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340). Apr 14 13:35:21.754581 sshd[6655]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:21.755513 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:21.772982 systemd-logind[1450]: New session 16 of user core. Apr 14 13:35:21.788557 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 14 13:35:22.129183 sshd[6655]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:22.144438 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:36340.service: Deactivated successfully. Apr 14 13:35:22.147190 systemd[1]: session-16.scope: Deactivated successfully. Apr 14 13:35:22.155204 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Apr 14 13:35:22.164436 systemd-logind[1450]: Removed session 16. Apr 14 13:35:26.729203 systemd[1]: run-containerd-runc-k8s.io-76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce-runc.UEEIui.mount: Deactivated successfully. Apr 14 13:35:27.206302 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:36350.service - OpenSSH per-connection server daemon (10.0.0.1:36350). Apr 14 13:35:27.375474 sshd[6712]: Accepted publickey for core from 10.0.0.1 port 36350 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:27.377281 sshd[6712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:27.433487 systemd-logind[1450]: New session 17 of user core. Apr 14 13:35:27.448322 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 14 13:35:28.123947 sshd[6712]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:28.127545 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:36350.service: Deactivated successfully. Apr 14 13:35:28.132455 systemd[1]: session-17.scope: Deactivated successfully. Apr 14 13:35:28.141434 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Apr 14 13:35:28.142770 systemd-logind[1450]: Removed session 17. Apr 14 13:35:28.787278 kubelet[2519]: E0414 13:35:28.785603 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:35:33.162527 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:59544.service - OpenSSH per-connection server daemon (10.0.0.1:59544). Apr 14 13:35:33.339208 sshd[6751]: Accepted publickey for core from 10.0.0.1 port 59544 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:33.345422 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:33.375565 systemd-logind[1450]: New session 18 of user core. Apr 14 13:35:33.397422 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 14 13:35:34.267554 sshd[6751]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:34.287610 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:59544.service: Deactivated successfully. Apr 14 13:35:34.290776 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 13:35:34.298512 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Apr 14 13:35:34.299989 systemd-logind[1450]: Removed session 18. Apr 14 13:35:39.348201 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:59554.service - OpenSSH per-connection server daemon (10.0.0.1:59554). Apr 14 13:35:39.492322 sshd[6767]: Accepted publickey for core from 10.0.0.1 port 59554 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:39.544878 sshd[6767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:39.591949 systemd-logind[1450]: New session 19 of user core. Apr 14 13:35:39.605300 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 13:35:40.671704 sshd[6767]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:40.724743 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Apr 14 13:35:40.733322 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:59554.service: Deactivated successfully. Apr 14 13:35:40.780672 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 13:35:40.797208 systemd-logind[1450]: Removed session 19. Apr 14 13:35:41.764764 kubelet[2519]: E0414 13:35:41.764638 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:35:45.764858 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:41714.service - OpenSSH per-connection server daemon (10.0.0.1:41714). 
Apr 14 13:35:45.773823 kubelet[2519]: E0414 13:35:45.773634 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:35:45.863729 sshd[6819]: Accepted publickey for core from 10.0.0.1 port 41714 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:45.872471 sshd[6819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:45.967737 systemd-logind[1450]: New session 20 of user core. Apr 14 13:35:45.975617 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 13:35:46.747238 sshd[6819]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:46.756737 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:41714.service: Deactivated successfully. Apr 14 13:35:46.767676 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 13:35:46.785074 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Apr 14 13:35:46.789926 systemd-logind[1450]: Removed session 20. Apr 14 13:35:51.870012 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:36422.service - OpenSSH per-connection server daemon (10.0.0.1:36422). Apr 14 13:35:51.952310 sshd[6837]: Accepted publickey for core from 10.0.0.1 port 36422 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:51.955378 sshd[6837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:51.974616 systemd-logind[1450]: New session 21 of user core. Apr 14 13:35:51.994091 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 13:35:52.735310 sshd[6837]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:52.753351 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:36422.service: Deactivated successfully. Apr 14 13:35:52.758768 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 13:35:52.763258 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Apr 14 13:35:52.764390 systemd-logind[1450]: Removed session 21. Apr 14 13:35:57.854603 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:36434.service - OpenSSH per-connection server daemon (10.0.0.1:36434). Apr 14 13:35:58.257416 sshd[6902]: Accepted publickey for core from 10.0.0.1 port 36434 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:35:58.262903 sshd[6902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:35:58.298660 systemd-logind[1450]: New session 22 of user core. Apr 14 13:35:58.305400 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 14 13:35:59.298173 sshd[6902]: pam_unix(sshd:session): session closed for user core Apr 14 13:35:59.344905 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:36434.service: Deactivated successfully. Apr 14 13:35:59.354529 systemd[1]: session-22.scope: Deactivated successfully. Apr 14 13:35:59.364454 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Apr 14 13:35:59.369310 systemd-logind[1450]: Removed session 22. 
Apr 14 13:36:01.776009 kubelet[2519]: E0414 13:36:01.775482 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:36:03.763018 kubelet[2519]: E0414 13:36:03.762761 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:36:04.361357 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:50098.service - OpenSSH per-connection server daemon (10.0.0.1:50098). Apr 14 13:36:04.530698 sshd[6942]: Accepted publickey for core from 10.0.0.1 port 50098 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:04.538624 sshd[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:04.579261 systemd-logind[1450]: New session 23 of user core. Apr 14 13:36:04.602788 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 14 13:36:05.236604 sshd[6942]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:05.252229 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Apr 14 13:36:05.252582 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:50098.service: Deactivated successfully. Apr 14 13:36:05.264545 systemd[1]: session-23.scope: Deactivated successfully. Apr 14 13:36:05.267991 systemd-logind[1450]: Removed session 23. Apr 14 13:36:06.763315 kubelet[2519]: E0414 13:36:06.763204 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:36:10.261095 systemd[1]: run-containerd-runc-k8s.io-d46be2e19973ab1447ca282c5f3b62639688e8f4e48d36cf6afaa8c199b9f71a-runc.dvhwqv.mount: Deactivated successfully. Apr 14 13:36:10.317659 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:39096.service - OpenSSH per-connection server daemon (10.0.0.1:39096). Apr 14 13:36:10.387191 sshd[6990]: Accepted publickey for core from 10.0.0.1 port 39096 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:10.390834 sshd[6990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:10.473540 systemd-logind[1450]: New session 24 of user core. Apr 14 13:36:10.509986 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 14 13:36:11.525599 sshd[6990]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:11.540397 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. Apr 14 13:36:11.540791 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:39096.service: Deactivated successfully. Apr 14 13:36:11.553342 systemd[1]: session-24.scope: Deactivated successfully. Apr 14 13:36:11.584401 systemd-logind[1450]: Removed session 24. Apr 14 13:36:16.637698 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:39104.service - OpenSSH per-connection server daemon (10.0.0.1:39104). Apr 14 13:36:16.744660 sshd[7023]: Accepted publickey for core from 10.0.0.1 port 39104 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:16.746256 sshd[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:16.762091 systemd-logind[1450]: New session 25 of user core. Apr 14 13:36:16.767437 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 14 13:36:17.673691 sshd[7023]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:17.691688 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:39104.service: Deactivated successfully. Apr 14 13:36:17.708152 systemd[1]: session-25.scope: Deactivated successfully. Apr 14 13:36:17.716601 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. Apr 14 13:36:17.724355 systemd-logind[1450]: Removed session 25. Apr 14 13:36:22.698583 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:42964.service - OpenSSH per-connection server daemon (10.0.0.1:42964). Apr 14 13:36:22.831800 sshd[7062]: Accepted publickey for core from 10.0.0.1 port 42964 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:22.858664 sshd[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:22.894872 systemd-logind[1450]: New session 26 of user core. Apr 14 13:36:22.926680 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 14 13:36:23.352659 sshd[7062]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:23.376557 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:42964.service: Deactivated successfully. Apr 14 13:36:23.407242 systemd[1]: session-26.scope: Deactivated successfully. Apr 14 13:36:23.415615 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. Apr 14 13:36:23.417677 systemd-logind[1450]: Removed session 26. Apr 14 13:36:27.762262 kubelet[2519]: E0414 13:36:27.762167 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:36:28.375351 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:42970.service - OpenSSH per-connection server daemon (10.0.0.1:42970). Apr 14 13:36:28.593171 sshd[7101]: Accepted publickey for core from 10.0.0.1 port 42970 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:28.671392 sshd[7101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:28.708433 systemd-logind[1450]: New session 27 of user core. Apr 14 13:36:28.728608 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 14 13:36:29.156127 sshd[7101]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:29.165775 systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:42970.service: Deactivated successfully. Apr 14 13:36:29.168084 systemd[1]: session-27.scope: Deactivated successfully. Apr 14 13:36:29.179330 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. Apr 14 13:36:29.198341 systemd-logind[1450]: Removed session 27. Apr 14 13:36:34.218379 systemd[1]: Started sshd@27-10.0.0.10:22-10.0.0.1:48744.service - OpenSSH per-connection server daemon (10.0.0.1:48744). Apr 14 13:36:34.362237 sshd[7136]: Accepted publickey for core from 10.0.0.1 port 48744 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:34.374326 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:34.389717 systemd-logind[1450]: New session 28 of user core. Apr 14 13:36:34.398424 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 14 13:36:35.155505 sshd[7136]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:35.165874 systemd[1]: sshd@27-10.0.0.10:22-10.0.0.1:48744.service: Deactivated successfully. Apr 14 13:36:35.233057 systemd[1]: session-28.scope: Deactivated successfully. 
Apr 14 13:36:35.234623 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit. Apr 14 13:36:35.236563 systemd-logind[1450]: Removed session 28. Apr 14 13:36:40.253706 systemd[1]: Started sshd@28-10.0.0.10:22-10.0.0.1:54716.service - OpenSSH per-connection server daemon (10.0.0.1:54716). Apr 14 13:36:40.499762 sshd[7168]: Accepted publickey for core from 10.0.0.1 port 54716 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:40.534582 sshd[7168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:40.561702 systemd-logind[1450]: New session 29 of user core. Apr 14 13:36:40.568478 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 14 13:36:41.475718 sshd[7168]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:41.487636 systemd[1]: sshd@28-10.0.0.10:22-10.0.0.1:54716.service: Deactivated successfully. Apr 14 13:36:41.530778 systemd[1]: session-29.scope: Deactivated successfully. Apr 14 13:36:41.540584 systemd-logind[1450]: Session 29 logged out. Waiting for processes to exit. Apr 14 13:36:41.543252 systemd-logind[1450]: Removed session 29. Apr 14 13:36:43.775590 kubelet[2519]: E0414 13:36:43.775075 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:36:46.493752 systemd[1]: Started sshd@29-10.0.0.10:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). Apr 14 13:36:46.635685 sshd[7189]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:46.640098 sshd[7189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:46.668661 systemd-logind[1450]: New session 30 of user core. Apr 14 13:36:46.685265 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 14 13:36:47.264135 sshd[7189]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:47.269404 systemd[1]: sshd@29-10.0.0.10:22-10.0.0.1:54724.service: Deactivated successfully. Apr 14 13:36:47.281524 systemd[1]: session-30.scope: Deactivated successfully. Apr 14 13:36:47.305789 systemd-logind[1450]: Session 30 logged out. Waiting for processes to exit. Apr 14 13:36:47.319099 systemd-logind[1450]: Removed session 30. Apr 14 13:36:52.330039 systemd[1]: Started sshd@30-10.0.0.10:22-10.0.0.1:60346.service - OpenSSH per-connection server daemon (10.0.0.1:60346). Apr 14 13:36:52.538454 sshd[7251]: Accepted publickey for core from 10.0.0.1 port 60346 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:52.545796 sshd[7251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:52.561630 systemd-logind[1450]: New session 31 of user core. Apr 14 13:36:52.574467 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 14 13:36:53.340552 sshd[7251]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:53.344323 systemd[1]: sshd@30-10.0.0.10:22-10.0.0.1:60346.service: Deactivated successfully. Apr 14 13:36:53.355588 systemd[1]: session-31.scope: Deactivated successfully. Apr 14 13:36:53.357609 systemd-logind[1450]: Session 31 logged out. Waiting for processes to exit. Apr 14 13:36:53.364594 systemd-logind[1450]: Removed session 31. Apr 14 13:36:58.389491 systemd[1]: Started sshd@31-10.0.0.10:22-10.0.0.1:60352.service - OpenSSH per-connection server daemon (10.0.0.1:60352). 
Apr 14 13:36:58.502128 sshd[7270]: Accepted publickey for core from 10.0.0.1 port 60352 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:36:58.557494 sshd[7270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:36:58.680411 systemd-logind[1450]: New session 32 of user core. Apr 14 13:36:58.685691 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 14 13:36:59.201595 sshd[7270]: pam_unix(sshd:session): session closed for user core Apr 14 13:36:59.205393 systemd[1]: sshd@31-10.0.0.10:22-10.0.0.1:60352.service: Deactivated successfully. Apr 14 13:36:59.210510 systemd[1]: session-32.scope: Deactivated successfully. Apr 14 13:36:59.213699 systemd-logind[1450]: Session 32 logged out. Waiting for processes to exit. Apr 14 13:36:59.217420 systemd-logind[1450]: Removed session 32. Apr 14 13:37:04.290409 systemd[1]: Started sshd@32-10.0.0.10:22-10.0.0.1:35696.service - OpenSSH per-connection server daemon (10.0.0.1:35696). Apr 14 13:37:04.375621 sshd[7305]: Accepted publickey for core from 10.0.0.1 port 35696 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:04.380505 sshd[7305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:04.400565 systemd-logind[1450]: New session 33 of user core. Apr 14 13:37:04.468194 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 14 13:37:05.147327 sshd[7305]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:05.157546 systemd[1]: sshd@32-10.0.0.10:22-10.0.0.1:35696.service: Deactivated successfully. Apr 14 13:37:05.163838 systemd[1]: session-33.scope: Deactivated successfully. Apr 14 13:37:05.164637 systemd-logind[1450]: Session 33 logged out. Waiting for processes to exit. Apr 14 13:37:05.165795 systemd-logind[1450]: Removed session 33. Apr 14 13:37:05.898911 kubelet[2519]: E0414 13:37:05.898711 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:07.908890 kubelet[2519]: E0414 13:37:07.908273 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:07.908890 kubelet[2519]: E0414 13:37:07.908531 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:10.271187 systemd[1]: Started sshd@33-10.0.0.10:22-10.0.0.1:43916.service - OpenSSH per-connection server daemon (10.0.0.1:43916). Apr 14 13:37:10.388038 sshd[7337]: Accepted publickey for core from 10.0.0.1 port 43916 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:10.405071 sshd[7337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:10.477607 systemd-logind[1450]: New session 34 of user core. Apr 14 13:37:10.497725 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 14 13:37:11.077565 sshd[7337]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:11.135004 systemd[1]: sshd@33-10.0.0.10:22-10.0.0.1:43916.service: Deactivated successfully. Apr 14 13:37:11.148325 systemd[1]: session-34.scope: Deactivated successfully. Apr 14 13:37:11.155671 systemd-logind[1450]: Session 34 logged out. Waiting for processes to exit. 
Apr 14 13:37:11.165734 systemd-logind[1450]: Removed session 34. Apr 14 13:37:16.218645 systemd[1]: Started sshd@34-10.0.0.10:22-10.0.0.1:43924.service - OpenSSH per-connection server daemon (10.0.0.1:43924). Apr 14 13:37:16.369021 sshd[7359]: Accepted publickey for core from 10.0.0.1 port 43924 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:16.379260 sshd[7359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:16.428521 systemd-logind[1450]: New session 35 of user core. Apr 14 13:37:16.443867 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 14 13:37:17.195885 sshd[7359]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:17.217186 systemd[1]: sshd@34-10.0.0.10:22-10.0.0.1:43924.service: Deactivated successfully. Apr 14 13:37:17.229887 systemd[1]: session-35.scope: Deactivated successfully. Apr 14 13:37:17.235866 systemd-logind[1450]: Session 35 logged out. Waiting for processes to exit. Apr 14 13:37:17.242553 systemd-logind[1450]: Removed session 35. Apr 14 13:37:22.218385 systemd[1]: Started sshd@35-10.0.0.10:22-10.0.0.1:56418.service - OpenSSH per-connection server daemon (10.0.0.1:56418). Apr 14 13:37:22.429745 sshd[7397]: Accepted publickey for core from 10.0.0.1 port 56418 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:22.431682 sshd[7397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:22.449680 systemd-logind[1450]: New session 36 of user core. Apr 14 13:37:22.457631 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 14 13:37:22.898540 sshd[7397]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:22.908059 systemd[1]: sshd@35-10.0.0.10:22-10.0.0.1:56418.service: Deactivated successfully. Apr 14 13:37:22.921690 systemd[1]: session-36.scope: Deactivated successfully. Apr 14 13:37:22.924744 systemd-logind[1450]: Session 36 logged out. Waiting for processes to exit. Apr 14 13:37:22.928047 systemd-logind[1450]: Removed session 36. Apr 14 13:37:23.776242 kubelet[2519]: E0414 13:37:23.776014 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:27.786928 kubelet[2519]: E0414 13:37:27.786457 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:27.979517 systemd[1]: Started sshd@36-10.0.0.10:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430). Apr 14 13:37:28.137712 sshd[7435]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:28.193627 sshd[7435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:28.247197 systemd-logind[1450]: New session 37 of user core. Apr 14 13:37:28.262414 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 14 13:37:29.078540 sshd[7435]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:29.096195 systemd[1]: sshd@36-10.0.0.10:22-10.0.0.1:56430.service: Deactivated successfully. Apr 14 13:37:29.126201 systemd[1]: session-37.scope: Deactivated successfully. Apr 14 13:37:29.128436 systemd-logind[1450]: Session 37 logged out. Waiting for processes to exit. Apr 14 13:37:29.134436 systemd-logind[1450]: Removed session 37. 
Apr 14 13:37:34.154587 systemd[1]: Started sshd@37-10.0.0.10:22-10.0.0.1:39142.service - OpenSSH per-connection server daemon (10.0.0.1:39142). Apr 14 13:37:34.474213 sshd[7472]: Accepted publickey for core from 10.0.0.1 port 39142 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:34.494451 sshd[7472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:34.542298 systemd-logind[1450]: New session 38 of user core. Apr 14 13:37:34.568451 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 14 13:37:35.898628 sshd[7472]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:35.930365 systemd[1]: sshd@37-10.0.0.10:22-10.0.0.1:39142.service: Deactivated successfully. Apr 14 13:37:35.949478 systemd[1]: session-38.scope: Deactivated successfully. Apr 14 13:37:35.953237 systemd-logind[1450]: Session 38 logged out. Waiting for processes to exit. Apr 14 13:37:35.973192 systemd[1]: Started sshd@38-10.0.0.10:22-10.0.0.1:39146.service - OpenSSH per-connection server daemon (10.0.0.1:39146). Apr 14 13:37:35.997381 systemd-logind[1450]: Removed session 38. Apr 14 13:37:36.307038 sshd[7492]: Accepted publickey for core from 10.0.0.1 port 39146 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:36.310227 sshd[7492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:36.329755 systemd-logind[1450]: New session 39 of user core. Apr 14 13:37:36.349617 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 14 13:37:38.164335 sshd[7492]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:38.274163 systemd[1]: sshd@38-10.0.0.10:22-10.0.0.1:39146.service: Deactivated successfully. Apr 14 13:37:38.291645 systemd[1]: session-39.scope: Deactivated successfully. Apr 14 13:37:38.292827 systemd[1]: session-39.scope: Consumed 1.102s CPU time. Apr 14 13:37:38.304462 systemd-logind[1450]: Session 39 logged out. Waiting for processes to exit. Apr 14 13:37:38.334485 systemd[1]: Started sshd@39-10.0.0.10:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156). Apr 14 13:37:38.350212 systemd-logind[1450]: Removed session 39. Apr 14 13:37:38.837937 sshd[7504]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:38.849769 sshd[7504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:38.947443 systemd-logind[1450]: New session 40 of user core. Apr 14 13:37:38.966701 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 14 13:37:40.328751 sshd[7504]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:40.361787 systemd[1]: sshd@39-10.0.0.10:22-10.0.0.1:39156.service: Deactivated successfully. Apr 14 13:37:40.386380 systemd[1]: session-40.scope: Deactivated successfully. Apr 14 13:37:40.388911 systemd-logind[1450]: Session 40 logged out. Waiting for processes to exit. Apr 14 13:37:40.444760 systemd-logind[1450]: Removed session 40. Apr 14 13:37:45.446056 systemd[1]: Started sshd@40-10.0.0.10:22-10.0.0.1:33126.service - OpenSSH per-connection server daemon (10.0.0.1:33126). 
Apr 14 13:37:45.754043 sshd[7563]: Accepted publickey for core from 10.0.0.1 port 33126 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:45.754607 sshd[7563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:45.814127 systemd-logind[1450]: New session 41 of user core. Apr 14 13:37:45.826575 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 14 13:37:47.170102 sshd[7563]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:47.201613 systemd-logind[1450]: Session 41 logged out. Waiting for processes to exit. Apr 14 13:37:47.204059 systemd[1]: sshd@40-10.0.0.10:22-10.0.0.1:33126.service: Deactivated successfully. Apr 14 13:37:47.221207 systemd[1]: session-41.scope: Deactivated successfully. Apr 14 13:37:47.228294 systemd-logind[1450]: Removed session 41. Apr 14 13:37:51.898295 kubelet[2519]: E0414 13:37:51.898098 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:52.347879 systemd[1]: Started sshd@41-10.0.0.10:22-10.0.0.1:60512.service - OpenSSH per-connection server daemon (10.0.0.1:60512). Apr 14 13:37:52.766183 sshd[7623]: Accepted publickey for core from 10.0.0.1 port 60512 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:52.767551 sshd[7623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:52.939782 systemd-logind[1450]: New session 42 of user core. Apr 14 13:37:52.951391 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 14 13:37:53.823879 kubelet[2519]: E0414 13:37:53.823625 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:37:53.949955 sshd[7623]: pam_unix(sshd:session): session closed for user core Apr 14 13:37:53.980204 systemd[1]: sshd@41-10.0.0.10:22-10.0.0.1:60512.service: Deactivated successfully. Apr 14 13:37:54.001592 systemd[1]: session-42.scope: Deactivated successfully. Apr 14 13:37:54.036415 systemd-logind[1450]: Session 42 logged out. Waiting for processes to exit. Apr 14 13:37:54.051426 systemd-logind[1450]: Removed session 42. Apr 14 13:37:59.059857 systemd[1]: Started sshd@42-10.0.0.10:22-10.0.0.1:60516.service - OpenSSH per-connection server daemon (10.0.0.1:60516). Apr 14 13:37:59.285915 sshd[7666]: Accepted publickey for core from 10.0.0.1 port 60516 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:37:59.316801 sshd[7666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:37:59.429141 systemd-logind[1450]: New session 43 of user core. Apr 14 13:37:59.443724 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 14 13:38:00.395273 sshd[7666]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:00.400180 systemd[1]: sshd@42-10.0.0.10:22-10.0.0.1:60516.service: Deactivated successfully. Apr 14 13:38:00.415613 systemd[1]: session-43.scope: Deactivated successfully. Apr 14 13:38:00.423633 systemd-logind[1450]: Session 43 logged out. Waiting for processes to exit. Apr 14 13:38:00.429695 systemd-logind[1450]: Removed session 43. Apr 14 13:38:05.554734 systemd[1]: Started sshd@43-10.0.0.10:22-10.0.0.1:59152.service - OpenSSH per-connection server daemon (10.0.0.1:59152). 
Apr 14 13:38:05.693576 sshd[7701]: Accepted publickey for core from 10.0.0.1 port 59152 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:05.756672 sshd[7701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:05.818161 systemd-logind[1450]: New session 44 of user core. Apr 14 13:38:05.826428 systemd[1]: Started session-44.scope - Session 44 of User core. Apr 14 13:38:06.983215 sshd[7701]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:07.006733 systemd[1]: sshd@43-10.0.0.10:22-10.0.0.1:59152.service: Deactivated successfully. Apr 14 13:38:07.014552 systemd[1]: session-44.scope: Deactivated successfully. Apr 14 13:38:07.016539 systemd-logind[1450]: Session 44 logged out. Waiting for processes to exit. Apr 14 13:38:07.030952 systemd-logind[1450]: Removed session 44. Apr 14 13:38:10.761891 kubelet[2519]: E0414 13:38:10.761788 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:38:12.006189 systemd[1]: Started sshd@44-10.0.0.10:22-10.0.0.1:52472.service - OpenSSH per-connection server daemon (10.0.0.1:52472). Apr 14 13:38:12.058886 sshd[7736]: Accepted publickey for core from 10.0.0.1 port 52472 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:12.062231 sshd[7736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:12.079698 systemd-logind[1450]: New session 45 of user core. Apr 14 13:38:12.088935 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 14 13:38:12.344942 sshd[7736]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:12.347894 systemd[1]: sshd@44-10.0.0.10:22-10.0.0.1:52472.service: Deactivated successfully. Apr 14 13:38:12.350573 systemd[1]: session-45.scope: Deactivated successfully. Apr 14 13:38:12.351387 systemd-logind[1450]: Session 45 logged out. Waiting for processes to exit. Apr 14 13:38:12.352225 systemd-logind[1450]: Removed session 45. Apr 14 13:38:17.382294 systemd[1]: Started sshd@45-10.0.0.10:22-10.0.0.1:52486.service - OpenSSH per-connection server daemon (10.0.0.1:52486). Apr 14 13:38:17.433500 sshd[7750]: Accepted publickey for core from 10.0.0.1 port 52486 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:17.435063 sshd[7750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:17.439223 systemd-logind[1450]: New session 46 of user core. Apr 14 13:38:17.450037 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 14 13:38:17.618108 sshd[7750]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:17.627319 systemd[1]: sshd@45-10.0.0.10:22-10.0.0.1:52486.service: Deactivated successfully. Apr 14 13:38:17.628952 systemd[1]: session-46.scope: Deactivated successfully. Apr 14 13:38:17.630203 systemd-logind[1450]: Session 46 logged out. Waiting for processes to exit. Apr 14 13:38:17.639183 systemd[1]: Started sshd@46-10.0.0.10:22-10.0.0.1:52492.service - OpenSSH per-connection server daemon (10.0.0.1:52492). Apr 14 13:38:17.640572 systemd-logind[1450]: Removed session 46. 
Apr 14 13:38:17.674569 sshd[7765]: Accepted publickey for core from 10.0.0.1 port 52492 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:17.676373 sshd[7765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:17.683550 systemd-logind[1450]: New session 47 of user core. Apr 14 13:38:17.689960 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 14 13:38:18.129721 sshd[7765]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:18.139082 systemd[1]: sshd@46-10.0.0.10:22-10.0.0.1:52492.service: Deactivated successfully. Apr 14 13:38:18.140494 systemd[1]: session-47.scope: Deactivated successfully. Apr 14 13:38:18.141730 systemd-logind[1450]: Session 47 logged out. Waiting for processes to exit. Apr 14 13:38:18.142845 systemd[1]: Started sshd@47-10.0.0.10:22-10.0.0.1:52504.service - OpenSSH per-connection server daemon (10.0.0.1:52504). Apr 14 13:38:18.147506 systemd-logind[1450]: Removed session 47. Apr 14 13:38:18.183534 sshd[7779]: Accepted publickey for core from 10.0.0.1 port 52504 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:18.185136 sshd[7779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:18.189082 systemd-logind[1450]: New session 48 of user core. Apr 14 13:38:18.194711 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 14 13:38:18.945045 sshd[7779]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:18.953033 systemd[1]: sshd@47-10.0.0.10:22-10.0.0.1:52504.service: Deactivated successfully. Apr 14 13:38:18.955350 systemd[1]: session-48.scope: Deactivated successfully. Apr 14 13:38:18.956543 systemd-logind[1450]: Session 48 logged out. Waiting for processes to exit. Apr 14 13:38:18.959948 systemd[1]: Started sshd@48-10.0.0.10:22-10.0.0.1:52506.service - OpenSSH per-connection server daemon (10.0.0.1:52506). Apr 14 13:38:18.963517 systemd-logind[1450]: Removed session 48. Apr 14 13:38:19.028285 sshd[7807]: Accepted publickey for core from 10.0.0.1 port 52506 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:19.031514 sshd[7807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:19.036114 systemd-logind[1450]: New session 49 of user core. Apr 14 13:38:19.044975 systemd[1]: Started session-49.scope - Session 49 of User core. Apr 14 13:38:19.333499 sshd[7807]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:19.355592 systemd[1]: sshd@48-10.0.0.10:22-10.0.0.1:52506.service: Deactivated successfully. Apr 14 13:38:19.358264 systemd[1]: session-49.scope: Deactivated successfully. Apr 14 13:38:19.360874 systemd-logind[1450]: Session 49 logged out. Waiting for processes to exit. Apr 14 13:38:19.366225 systemd-logind[1450]: Removed session 49. Apr 14 13:38:19.373704 systemd[1]: Started sshd@49-10.0.0.10:22-10.0.0.1:52508.service - OpenSSH per-connection server daemon (10.0.0.1:52508). Apr 14 13:38:19.463660 sshd[7820]: Accepted publickey for core from 10.0.0.1 port 52508 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:19.465121 sshd[7820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:19.471300 systemd-logind[1450]: New session 50 of user core. Apr 14 13:38:19.475970 systemd[1]: Started session-50.scope - Session 50 of User core. 
Apr 14 13:38:19.630978 sshd[7820]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:19.633798 systemd[1]: sshd@49-10.0.0.10:22-10.0.0.1:52508.service: Deactivated successfully. Apr 14 13:38:19.635991 systemd[1]: session-50.scope: Deactivated successfully. Apr 14 13:38:19.637042 systemd-logind[1450]: Session 50 logged out. Waiting for processes to exit. Apr 14 13:38:19.638182 systemd-logind[1450]: Removed session 50. Apr 14 13:38:24.646547 systemd[1]: Started sshd@50-10.0.0.10:22-10.0.0.1:38832.service - OpenSSH per-connection server daemon (10.0.0.1:38832). Apr 14 13:38:24.680253 sshd[7872]: Accepted publickey for core from 10.0.0.1 port 38832 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:24.681856 sshd[7872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:24.687158 systemd-logind[1450]: New session 51 of user core. Apr 14 13:38:24.693971 systemd[1]: Started session-51.scope - Session 51 of User core. Apr 14 13:38:24.758649 kubelet[2519]: E0414 13:38:24.758587 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:38:24.839405 sshd[7872]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:24.842926 systemd[1]: sshd@50-10.0.0.10:22-10.0.0.1:38832.service: Deactivated successfully. Apr 14 13:38:24.845121 systemd[1]: session-51.scope: Deactivated successfully. Apr 14 13:38:24.845788 systemd-logind[1450]: Session 51 logged out. Waiting for processes to exit. Apr 14 13:38:24.846624 systemd-logind[1450]: Removed session 51. Apr 14 13:38:26.657424 systemd[1]: run-containerd-runc-k8s.io-76df38b6c8ee0258a43ff63f78048dfb3b6a0e0ee2afc846316554ec39627cce-runc.d5ObN2.mount: Deactivated successfully. Apr 14 13:38:29.861745 systemd[1]: Started sshd@51-10.0.0.10:22-10.0.0.1:56602.service - OpenSSH per-connection server daemon (10.0.0.1:56602). Apr 14 13:38:29.902433 sshd[7908]: Accepted publickey for core from 10.0.0.1 port 56602 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:29.904047 sshd[7908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:29.920318 systemd-logind[1450]: New session 52 of user core. Apr 14 13:38:29.943013 systemd[1]: Started session-52.scope - Session 52 of User core. Apr 14 13:38:30.071708 sshd[7908]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:30.074457 systemd[1]: sshd@51-10.0.0.10:22-10.0.0.1:56602.service: Deactivated successfully. Apr 14 13:38:30.075881 systemd[1]: session-52.scope: Deactivated successfully. Apr 14 13:38:30.077961 systemd-logind[1450]: Session 52 logged out. Waiting for processes to exit. Apr 14 13:38:30.080933 systemd-logind[1450]: Removed session 52. Apr 14 13:38:31.761796 kubelet[2519]: E0414 13:38:31.761704 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:38:34.758925 kubelet[2519]: E0414 13:38:34.758793 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:38:35.085648 systemd[1]: Started sshd@52-10.0.0.10:22-10.0.0.1:56618.service - OpenSSH per-connection server daemon (10.0.0.1:56618). 
Apr 14 13:38:35.120527 sshd[7948]: Accepted publickey for core from 10.0.0.1 port 56618 ssh2: RSA SHA256:STqg7NDKHqB1pC6cv1a9vkNfz6oKwIzfWFn4Twt++GI Apr 14 13:38:35.122458 sshd[7948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:38:35.127003 systemd-logind[1450]: New session 53 of user core. Apr 14 13:38:35.135840 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 14 13:38:35.239702 sshd[7948]: pam_unix(sshd:session): session closed for user core Apr 14 13:38:35.242353 systemd[1]: sshd@52-10.0.0.10:22-10.0.0.1:56618.service: Deactivated successfully. Apr 14 13:38:35.243683 systemd[1]: session-53.scope: Deactivated successfully. Apr 14 13:38:35.244219 systemd-logind[1450]: Session 53 logged out. Waiting for processes to exit. Apr 14 13:38:35.244898 systemd-logind[1450]: Removed session 53.