Apr 24 23:52:11.068200 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:52:11.068226 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:52:11.068236 kernel: BIOS-provided physical RAM map:
Apr 24 23:52:11.068242 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 24 23:52:11.068247 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 24 23:52:11.068252 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 23:52:11.068259 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 24 23:52:11.068264 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 24 23:52:11.068269 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 23:52:11.068276 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 23:52:11.068282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 23:52:11.068287 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 23:52:11.068299 kernel: NX (Execute Disable) protection: active
Apr 24 23:52:11.068305 kernel: APIC: Static calls initialized
Apr 24 23:52:11.068311 kernel: SMBIOS 2.8 present.
Apr 24 23:52:11.068326 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 24 23:52:11.068332 kernel: Hypervisor detected: KVM
Apr 24 23:52:11.068338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:52:11.068344 kernel: kvm-clock: using sched offset of 4613569959 cycles
Apr 24 23:52:11.068350 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:52:11.068356 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 23:52:11.068362 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:52:11.068368 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:52:11.068374 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 24 23:52:11.068381 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 23:52:11.068387 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:52:11.068393 kernel: Using GB pages for direct mapping
Apr 24 23:52:11.068399 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:52:11.068405 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 24 23:52:11.068411 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068417 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068422 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068428 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 24 23:52:11.068435 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068441 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068447 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068452 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:52:11.068458 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 24 23:52:11.068464 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 24 23:52:11.068470 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 24 23:52:11.068479 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 24 23:52:11.068486 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 24 23:52:11.068492 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 24 23:52:11.068499 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 24 23:52:11.068504 kernel: No NUMA configuration found
Apr 24 23:52:11.068510 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 24 23:52:11.068516 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 24 23:52:11.068524 kernel: Zone ranges:
Apr 24 23:52:11.068530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:52:11.068536 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 24 23:52:11.068542 kernel: Normal empty
Apr 24 23:52:11.068548 kernel: Movable zone start for each node
Apr 24 23:52:11.068554 kernel: Early memory node ranges
Apr 24 23:52:11.068560 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 23:52:11.068566 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 24 23:52:11.068572 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 24 23:52:11.068578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:52:11.068586 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 23:52:11.068598 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 24 23:52:11.068604 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 23:52:11.068610 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:52:11.068616 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:52:11.068622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 23:52:11.068628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:52:11.068634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:52:11.068640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:52:11.068647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:52:11.068654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:52:11.068659 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:52:11.068666 kernel: TSC deadline timer available
Apr 24 23:52:11.068671 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 24 23:52:11.068676 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:52:11.068681 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 23:52:11.068686 kernel: kvm-guest: setup PV sched yield
Apr 24 23:52:11.068697 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 23:52:11.068704 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:52:11.068710 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:52:11.068715 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 23:52:11.068720 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 24 23:52:11.068725 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 24 23:52:11.068730 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 23:52:11.068735 kernel: kvm-guest: PV spinlocks enabled
Apr 24 23:52:11.068740 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:52:11.068746 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:52:11.068753 kernel: random: crng init done
Apr 24 23:52:11.068758 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:52:11.068763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:52:11.068768 kernel: Fallback order for Node 0: 0
Apr 24 23:52:11.068773 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 24 23:52:11.068778 kernel: Policy zone: DMA32
Apr 24 23:52:11.068783 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:52:11.068789 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 24 23:52:11.068796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 23:52:11.068801 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:52:11.068806 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:52:11.068811 kernel: Dynamic Preempt: voluntary
Apr 24 23:52:11.068816 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:52:11.068821 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:52:11.068827 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 23:52:11.068832 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:52:11.068837 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:52:11.068844 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:52:11.068849 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:52:11.068854 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 23:52:11.068859 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 23:52:11.068870 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:52:11.068876 kernel: Console: colour VGA+ 80x25
Apr 24 23:52:11.068881 kernel: printk: console [ttyS0] enabled
Apr 24 23:52:11.068886 kernel: ACPI: Core revision 20230628
Apr 24 23:52:11.068891 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 23:52:11.068898 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:52:11.068903 kernel: x2apic enabled
Apr 24 23:52:11.068908 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:52:11.068913 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 23:52:11.068918 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 23:52:11.068942 kernel: kvm-guest: setup PV IPIs
Apr 24 23:52:11.068948 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 23:52:11.068953 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:52:11.068966 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 23:52:11.068971 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 23:52:11.068977 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 23:52:11.068983 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 23:52:11.068990 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:52:11.068995 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:52:11.069001 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:52:11.069007 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:52:11.069014 kernel: RETBleed: Vulnerable
Apr 24 23:52:11.069019 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:52:11.069025 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:52:11.069037 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 23:52:11.069043 kernel: active return thunk: its_return_thunk
Apr 24 23:52:11.069048 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:52:11.069054 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:52:11.069059 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:52:11.069065 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:52:11.069072 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:52:11.069078 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:52:11.069084 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:52:11.069089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:52:11.069095 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:52:11.069100 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:52:11.069106 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:52:11.069111 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 23:52:11.069117 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:52:11.069124 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:52:11.069130 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:52:11.069136 kernel: landlock: Up and running.
Apr 24 23:52:11.069141 kernel: SELinux: Initializing.
Apr 24 23:52:11.069158 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:52:11.069164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:52:11.069169 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 23:52:11.069181 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:52:11.069187 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:52:11.069194 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:52:11.069200 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 23:52:11.069206 kernel: signal: max sigframe size: 3632
Apr 24 23:52:11.069211 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:52:11.069217 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:52:11.069223 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:52:11.069228 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:52:11.069234 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:52:11.069239 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 23:52:11.069246 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 23:52:11.069252 kernel: smpboot: Max logical packages: 1
Apr 24 23:52:11.069257 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 23:52:11.069263 kernel: devtmpfs: initialized
Apr 24 23:52:11.069268 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:52:11.069274 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:52:11.069280 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 23:52:11.069285 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:52:11.069291 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:52:11.069298 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:52:11.069303 kernel: audit: type=2000 audit(1777074729.129:1): state=initialized audit_enabled=0 res=1
Apr 24 23:52:11.069309 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:52:11.069314 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:52:11.069320 kernel: cpuidle: using governor menu
Apr 24 23:52:11.069326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:52:11.069331 kernel: dca service started, version 1.12.1
Apr 24 23:52:11.069337 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 24 23:52:11.069343 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 23:52:11.069349 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:52:11.069355 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:52:11.069361 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:52:11.069366 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:52:11.069372 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:52:11.069377 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:52:11.069383 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:52:11.069389 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:52:11.069394 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:52:11.069402 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:52:11.069407 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:52:11.069412 kernel: ACPI: Interpreter enabled
Apr 24 23:52:11.069418 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 23:52:11.069423 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:52:11.069429 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:52:11.069435 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:52:11.069440 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:52:11.069446 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:52:11.069688 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:52:11.069783 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:52:11.069847 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:52:11.069855 kernel: PCI host bridge to bus 0000:00
Apr 24 23:52:11.069982 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:52:11.070041 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:52:11.070138 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:52:11.070306 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 24 23:52:11.070366 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 23:52:11.070421 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 24 23:52:11.070476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:52:11.070605 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:52:11.070705 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 24 23:52:11.070774 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 24 23:52:11.070835 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 24 23:52:11.070894 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 24 23:52:11.071043 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:52:11.071183 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 24 23:52:11.071273 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 24 23:52:11.071337 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 24 23:52:11.071404 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 23:52:11.072251 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 24 23:52:11.072352 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 24 23:52:11.073053 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 24 23:52:11.073139 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 23:52:11.073259 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 24 23:52:11.073336 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 24 23:52:11.074067 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 24 23:52:11.074179 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 24 23:52:11.074259 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 24 23:52:11.074371 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:52:11.074446 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:52:11.074564 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:52:11.074653 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 24 23:52:11.074734 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 24 23:52:11.074841 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:52:11.074922 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 24 23:52:11.074975 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:52:11.074985 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:52:11.074993 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:52:11.074999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:52:11.075010 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:52:11.075017 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:52:11.075024 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:52:11.075030 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:52:11.075037 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:52:11.075043 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:52:11.075050 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:52:11.075057 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:52:11.075064 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:52:11.075072 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:52:11.075078 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:52:11.075085 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:52:11.075092 kernel: iommu: Default domain type: Translated
Apr 24 23:52:11.075098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:52:11.075105 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:52:11.075112 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:52:11.075118 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 24 23:52:11.075125 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 24 23:52:11.075237 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:52:11.075310 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:52:11.075382 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:52:11.075391 kernel: vgaarb: loaded
Apr 24 23:52:11.075397 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:52:11.075404 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:52:11.075411 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:52:11.075420 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:52:11.075429 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:52:11.075436 kernel: pnp: PnP ACPI init
Apr 24 23:52:11.075562 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 23:52:11.075572 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 23:52:11.075579 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:52:11.075585 kernel: NET: Registered PF_INET protocol family
Apr 24 23:52:11.075592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:52:11.075599 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:52:11.075608 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:52:11.075615 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:52:11.075621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:52:11.075628 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:52:11.075635 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:52:11.075642 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:52:11.075649 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:52:11.075655 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:52:11.075730 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:52:11.075806 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:52:11.075871 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:52:11.075959 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 24 23:52:11.076024 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 23:52:11.076087 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 24 23:52:11.076095 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:52:11.076102 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:52:11.076109 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:52:11.076118 kernel: Initialise system trusted keyrings
Apr 24 23:52:11.076125 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:52:11.076132 kernel: Key type asymmetric registered
Apr 24 23:52:11.076139 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:52:11.076158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:52:11.076165 kernel: io scheduler mq-deadline registered
Apr 24 23:52:11.076174 kernel: io scheduler kyber registered
Apr 24 23:52:11.076181 kernel: io scheduler bfq registered
Apr 24 23:52:11.076187 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:52:11.076197 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 23:52:11.076204 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 23:52:11.076210 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 23:52:11.076217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:52:11.076224 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:52:11.076231 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:52:11.076238 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:52:11.076244 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:52:11.076380 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 23:52:11.076393 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:52:11.076456 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 23:52:11.076517 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T23:52:10 UTC (1777074730)
Apr 24 23:52:11.076576 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 23:52:11.076584 kernel: intel_pstate: CPU model not supported
Apr 24 23:52:11.076590 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:52:11.076595 kernel: Segment Routing with IPv6
Apr 24 23:52:11.076601 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:52:11.076608 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:52:11.076614 kernel: Key type dns_resolver registered
Apr 24 23:52:11.076620 kernel: IPI shorthand broadcast: enabled
Apr 24 23:52:11.076626 kernel: sched_clock: Marking stable (1330008382, 179951863)->(1551886768, -41926523)
Apr 24 23:52:11.076631 kernel: registered taskstats version 1
Apr 24 23:52:11.076637 kernel: Loading compiled-in X.509 certificates
Apr 24 23:52:11.076643 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:52:11.076648 kernel: Key type .fscrypt registered
Apr 24 23:52:11.076654 kernel: Key type fscrypt-provisioning registered
Apr 24 23:52:11.076661 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:52:11.076666 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:52:11.076672 kernel: ima: No architecture policies found
Apr 24 23:52:11.076678 kernel: clk: Disabling unused clocks
Apr 24 23:52:11.076683 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:52:11.076689 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:52:11.076714 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:52:11.076722 kernel: Run /init as init process
Apr 24 23:52:11.076728 kernel: with arguments:
Apr 24 23:52:11.076733 kernel: /init
Apr 24 23:52:11.076741 kernel: with environment:
Apr 24 23:52:11.076747 kernel: HOME=/
Apr 24 23:52:11.076753 kernel: TERM=linux
Apr 24 23:52:11.076760 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:52:11.076769 systemd[1]: Detected virtualization kvm.
Apr 24 23:52:11.076775 systemd[1]: Detected architecture x86-64.
Apr 24 23:52:11.076781 systemd[1]: Running in initrd.
Apr 24 23:52:11.076789 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:52:11.076795 systemd[1]: Hostname set to .
Apr 24 23:52:11.076802 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:52:11.076808 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:52:11.076814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:52:11.076820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:52:11.076826 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:52:11.076833 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:52:11.076840 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:52:11.076847 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:52:11.076864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:52:11.076870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:52:11.076876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:52:11.076884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:52:11.076891 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:52:11.076897 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:52:11.076903 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:52:11.076909 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:52:11.076916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:52:11.076922 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:52:11.078001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:52:11.078010 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:52:11.078024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:52:11.078032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:52:11.078040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:52:11.078047 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:52:11.078055 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:52:11.078063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:52:11.078071 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:52:11.078078 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:52:11.078088 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:52:11.078096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:52:11.078103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:52:11.078111 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:52:11.078119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:52:11.078167 systemd-journald[193]: Collecting audit messages is disabled.
Apr 24 23:52:11.078188 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:52:11.078200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:52:11.078208 systemd-journald[193]: Journal started
Apr 24 23:52:11.078227 systemd-journald[193]: Runtime Journal (/run/log/journal/49437b82970e495cb767cdba7a6ab8c1) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:52:11.077242 systemd-modules-load[194]: Inserted module 'overlay'
Apr 24 23:52:11.195277 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:52:11.195318 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:52:11.195329 kernel: Bridge firewalling registered
Apr 24 23:52:11.103685 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 24 23:52:11.198066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:52:11.204125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:52:11.204820 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:52:11.247338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:52:11.253846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:52:11.256688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:52:11.291834 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:52:11.302161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:52:11.303030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:52:11.306118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:52:11.322128 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:52:11.322878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:52:11.326835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:52:11.338872 dracut-cmdline[226]: dracut-dracut-053
Apr 24 23:52:11.344695 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:52:11.350853 systemd-resolved[231]: Positive Trust Anchors:
Apr 24 23:52:11.350885 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:52:11.350910 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:52:11.353050 systemd-resolved[231]: Defaulting to hostname 'linux'.
Apr 24 23:52:11.354467 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:52:11.364383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:52:11.439008 kernel: SCSI subsystem initialized
Apr 24 23:52:11.447198 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:52:11.456970 kernel: iscsi: registered transport (tcp)
Apr 24 23:52:11.475978 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:52:11.476034 kernel: QLogic iSCSI HBA Driver
Apr 24 23:52:11.513669 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:52:11.533083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:52:11.553465 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:52:11.553524 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:52:11.554836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:52:11.594987 kernel: raid6: avx512x4 gen() 46666 MB/s
Apr 24 23:52:11.611979 kernel: raid6: avx512x2 gen() 44831 MB/s
Apr 24 23:52:11.628965 kernel: raid6: avx512x1 gen() 45394 MB/s
Apr 24 23:52:11.645960 kernel: raid6: avx2x4 gen() 37894 MB/s
Apr 24 23:52:11.662971 kernel: raid6: avx2x2 gen() 37570 MB/s
Apr 24 23:52:11.680723 kernel: raid6: avx2x1 gen() 27367 MB/s
Apr 24 23:52:11.681014 kernel: raid6: using algorithm avx512x4 gen() 46666 MB/s
Apr 24 23:52:11.698475 kernel: raid6: .... xor() 10262 MB/s, rmw enabled
Apr 24 23:52:11.698487 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:52:11.717969 kernel: xor: automatically using best checksumming function avx
Apr 24 23:52:11.869081 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:52:11.887091 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:52:11.908380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:52:11.923608 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 24 23:52:11.928064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:52:11.942302 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:52:11.972585 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Apr 24 23:52:12.000585 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:52:12.014184 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:52:12.093054 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:52:12.105914 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:52:12.117433 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:52:12.120282 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:52:12.121779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:52:12.122201 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:52:12.133962 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 24 23:52:12.144640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:52:12.150369 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:52:12.150390 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 24 23:52:12.160075 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:52:12.160185 kernel: GPT:9289727 != 19775487
Apr 24 23:52:12.160202 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:52:12.160215 kernel: GPT:9289727 != 19775487
Apr 24 23:52:12.160227 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:52:12.160241 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:52:12.159081 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:52:12.167315 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:52:12.167416 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:52:12.174198 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:52:12.179088 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:52:12.175837 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:52:12.178840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:52:12.179063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:52:12.179733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:52:12.192949 kernel: libata version 3.00 loaded.
Apr 24 23:52:12.193850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:52:12.198963 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 23:52:12.199966 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 23:52:12.203263 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 24 23:52:12.203406 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 23:52:12.208972 kernel: scsi host0: ahci
Apr 24 23:52:12.245143 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (461)
Apr 24 23:52:12.245349 kernel: scsi host1: ahci
Apr 24 23:52:12.247215 kernel: scsi host2: ahci
Apr 24 23:52:12.240008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 24 23:52:12.361644 kernel: scsi host3: ahci
Apr 24 23:52:12.361887 kernel: scsi host4: ahci
Apr 24 23:52:12.362119 kernel: scsi host5: ahci
Apr 24 23:52:12.362414 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 24 23:52:12.362423 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 24 23:52:12.362438 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 24 23:52:12.362445 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 24 23:52:12.362453 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 24 23:52:12.362460 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 24 23:52:12.362467 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Apr 24 23:52:12.355744 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:52:12.383518 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 24 23:52:12.384412 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 24 23:52:12.391249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 24 23:52:12.394554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:52:12.405151 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:52:12.408288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:52:12.412955 disk-uuid[555]: Primary Header is updated.
Apr 24 23:52:12.412955 disk-uuid[555]: Secondary Entries is updated.
Apr 24 23:52:12.412955 disk-uuid[555]: Secondary Header is updated.
Apr 24 23:52:12.417948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:52:12.420962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:52:12.424954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:52:12.435179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:52:12.611142 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 23:52:12.611469 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 23:52:12.611480 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 24 23:52:12.614073 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 23:52:12.615327 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 23:52:12.619256 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 23:52:12.620516 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 24 23:52:12.620600 kernel: ata3.00: applying bridge limits
Apr 24 23:52:12.623421 kernel: ata3.00: configured for UDMA/100
Apr 24 23:52:12.627134 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 24 23:52:12.682639 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 24 23:52:12.683258 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:52:12.698021 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 24 23:52:13.427992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:52:13.428818 disk-uuid[556]: The operation has completed successfully.
Apr 24 23:52:13.464810 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:52:13.465524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:52:13.484123 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:52:13.488630 sh[596]: Success
Apr 24 23:52:13.502956 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 24 23:52:13.539288 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:52:13.552259 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:52:13.554643 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:52:13.565687 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:52:13.565725 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:52:13.565735 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:52:13.567232 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:52:13.569023 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:52:13.573232 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:52:13.575957 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:52:13.596732 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:52:13.600497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:52:13.611577 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:52:13.611613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:52:13.611622 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:52:13.615960 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:52:13.625414 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:52:13.628979 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:52:13.635239 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:52:13.644151 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:52:13.830174 kernel: hrtimer: interrupt took 2781480 ns
Apr 24 23:52:13.854044 ignition[690]: Ignition 2.19.0
Apr 24 23:52:13.854066 ignition[690]: Stage: fetch-offline
Apr 24 23:52:13.854124 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:52:13.854133 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:52:13.854275 ignition[690]: parsed url from cmdline: ""
Apr 24 23:52:13.854277 ignition[690]: no config URL provided
Apr 24 23:52:13.854281 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:52:13.854287 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:52:13.854308 ignition[690]: op(1): [started] loading QEMU firmware config module
Apr 24 23:52:13.854312 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 24 23:52:13.866677 ignition[690]: op(1): [finished] loading QEMU firmware config module
Apr 24 23:52:13.873578 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:52:13.888144 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:52:13.907154 systemd-networkd[784]: lo: Link UP
Apr 24 23:52:13.907168 systemd-networkd[784]: lo: Gained carrier
Apr 24 23:52:13.908497 systemd-networkd[784]: Enumeration completed
Apr 24 23:52:13.909025 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:52:13.910426 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:52:13.910428 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:52:13.911305 systemd[1]: Reached target network.target - Network.
Apr 24 23:52:13.911524 systemd-networkd[784]: eth0: Link UP
Apr 24 23:52:13.911527 systemd-networkd[784]: eth0: Gained carrier
Apr 24 23:52:13.911533 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:52:13.929033 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 23:52:13.973755 ignition[690]: parsing config with SHA512: dfbda3de1d0e9f983a923eccc978d212263b8e7aa8f2fb1b32579c52f1cee18e43c445961cd6b631633108ec16b40c073639a07dae489b72021e2547b9769b21
Apr 24 23:52:13.979781 unknown[690]: fetched base config from "system"
Apr 24 23:52:13.979793 unknown[690]: fetched user config from "qemu"
Apr 24 23:52:13.980159 ignition[690]: fetch-offline: fetch-offline passed
Apr 24 23:52:13.982105 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:52:13.980258 ignition[690]: Ignition finished successfully
Apr 24 23:52:13.984178 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 24 23:52:13.997075 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:52:14.015342 ignition[788]: Ignition 2.19.0
Apr 24 23:52:14.015354 ignition[788]: Stage: kargs
Apr 24 23:52:14.015569 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:52:14.015577 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:52:14.018443 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:52:14.016503 ignition[788]: kargs: kargs passed
Apr 24 23:52:14.016537 ignition[788]: Ignition finished successfully
Apr 24 23:52:14.028175 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:52:14.049435 ignition[796]: Ignition 2.19.0
Apr 24 23:52:14.049450 ignition[796]: Stage: disks
Apr 24 23:52:14.049610 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:52:14.049619 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:52:14.050312 ignition[796]: disks: disks passed
Apr 24 23:52:14.053420 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:52:14.050347 ignition[796]: Ignition finished successfully
Apr 24 23:52:14.055953 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:52:14.058358 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:52:14.060742 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:52:14.063307 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:52:14.066099 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:52:14.081781 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:52:14.097110 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:52:14.100970 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:52:14.114584 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:52:14.234971 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:52:14.235748 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:52:14.236808 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:52:14.253019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:52:14.254991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:52:14.260979 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Apr 24 23:52:14.257317 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:52:14.257354 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:52:14.257374 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:52:14.262018 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:52:14.266338 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:52:14.276498 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:52:14.276523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:52:14.276532 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:52:14.279959 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:52:14.281208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:52:14.299277 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:52:14.302579 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:52:14.305423 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:52:14.309429 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:52:14.385704 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:52:14.392177 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:52:14.394054 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:52:14.400998 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:52:14.414538 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:52:14.423124 ignition[930]: INFO : Ignition 2.19.0
Apr 24 23:52:14.423124 ignition[930]: INFO : Stage: mount
Apr 24 23:52:14.427002 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:52:14.427002 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:52:14.427002 ignition[930]: INFO : mount: mount passed
Apr 24 23:52:14.427002 ignition[930]: INFO : Ignition finished successfully
Apr 24 23:52:14.424764 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:52:14.435099 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:52:14.568173 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:52:14.579151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:52:14.588059 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Apr 24 23:52:14.588230 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:52:14.590461 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:52:14.590481 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:52:14.594985 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:52:14.596246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:52:14.627341 ignition[959]: INFO : Ignition 2.19.0
Apr 24 23:52:14.627341 ignition[959]: INFO : Stage: files
Apr 24 23:52:14.630095 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:52:14.630095 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:52:14.630095 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:52:14.630095 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:52:14.630095 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:52:14.639717 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:52:14.639717 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:52:14.639717 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:52:14.639717 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:52:14.639717 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:52:14.633514 unknown[959]: wrote ssh authorized keys file for user: core
Apr 24 23:52:14.698543 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:52:14.857847 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:52:14.860787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:52:14.891335 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:52:14.891335 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:52:14.891335 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 24 23:52:15.132899 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:52:15.383644 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 24 23:52:16.196034 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:52:16.196034 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 23:52:16.202979 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:52:16.232919 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:52:16.238862 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:52:16.243108 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:52:16.243108 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:52:16.243108 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:52:16.243108 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:52:16.243108 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:52:16.243108 ignition[959]: INFO : files: files passed
Apr 24 23:52:16.243108 ignition[959]: INFO : Ignition finished successfully
Apr 24 23:52:16.243203 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:52:16.261248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:52:16.266061 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:52:16.269873 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:52:16.269990 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:52:16.275857 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 23:52:16.281298 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:52:16.281298 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:52:16.285468 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:52:16.290015 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:52:16.290819 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:52:16.308175 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:52:16.338212 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:52:16.338393 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:52:16.341287 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:52:16.341762 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:52:16.345510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:52:16.349497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:52:16.369178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:52:16.386075 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:52:16.399562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:52:16.400315 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:52:16.403467 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:52:16.406392 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:52:16.406569 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:52:16.411286 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:52:16.413862 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:52:16.414578 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:52:16.418484 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:52:16.423333 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:52:16.425605 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:52:16.430705 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:52:16.431391 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:52:16.434633 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:52:16.439723 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:52:16.440355 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:52:16.440630 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:52:16.445171 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:52:16.448092 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:52:16.451237 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:52:16.453601 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:52:16.456850 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:52:16.457133 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:52:16.460735 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:52:16.460980 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:52:16.463640 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:52:16.467087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:52:16.471701 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:52:16.472631 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:52:16.476670 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:52:16.480603 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:52:16.480720 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:52:16.481553 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:52:16.481613 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:52:16.485086 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:52:16.485258 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:52:16.487567 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:52:16.487682 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:52:16.505241 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:52:16.508268 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:52:16.509473 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:52:16.509598 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:52:16.512367 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:52:16.512469 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:52:16.520058 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:52:16.520145 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:52:16.530125 ignition[1013]: INFO : Ignition 2.19.0
Apr 24 23:52:16.530125 ignition[1013]: INFO : Stage: umount
Apr 24 23:52:16.532355 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:52:16.532355 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:52:16.532355 ignition[1013]: INFO : umount: umount passed
Apr 24 23:52:16.532355 ignition[1013]: INFO : Ignition finished successfully
Apr 24 23:52:16.532387 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:52:16.532778 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:52:16.532870 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:52:16.537118 systemd[1]: Stopped target network.target - Network.
Apr 24 23:52:16.538600 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:52:16.538661 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:52:16.542278 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:52:16.542427 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:52:16.542728 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:52:16.542841 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:52:16.546374 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:52:16.546464 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:52:16.548886 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:52:16.551308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:52:16.565240 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:52:16.565779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:52:16.569627 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:52:16.569665 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:52:16.572390 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 24 23:52:16.575382 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:52:16.575993 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:52:16.577249 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:52:16.577300 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:52:16.594373 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:52:16.597473 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:52:16.597599 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:52:16.601031 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:52:16.601114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:52:16.601986 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:52:16.602082 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:52:16.606518 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:52:16.611333 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:52:16.611440 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:52:16.614766 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:52:16.614805 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:52:16.623688 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:52:16.623907 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:52:16.627819 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:52:16.629094 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:52:16.632758 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:52:16.632812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:52:16.635440 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:52:16.635488 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:52:16.637963 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:52:16.638001 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:52:16.641125 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:52:16.641157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:52:16.645995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:52:16.646143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:52:16.664858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:52:16.668440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:52:16.668543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:52:16.671362 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:52:16.671401 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:52:16.672260 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:52:16.672289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:52:16.676753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:52:16.676882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:52:16.681345 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:52:16.681438 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:52:16.683669 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:52:16.689862 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:52:16.701567 systemd[1]: Switching root.
Apr 24 23:52:16.732426 systemd-journald[193]: Journal stopped
Apr 24 23:52:17.700378 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:52:17.700459 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:52:17.700478 kernel: SELinux: policy capability open_perms=1
Apr 24 23:52:17.700493 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:52:17.700504 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:52:17.700516 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:52:17.700528 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:52:17.700538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:52:17.700550 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:52:17.700562 kernel: audit: type=1403 audit(1777074736.840:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:52:17.700576 systemd[1]: Successfully loaded SELinux policy in 33.281ms.
Apr 24 23:52:17.700599 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.735ms.
Apr 24 23:52:17.700617 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:52:17.700636 systemd[1]: Detected virtualization kvm.
Apr 24 23:52:17.700650 systemd[1]: Detected architecture x86-64.
Apr 24 23:52:17.700663 systemd[1]: Detected first boot.
Apr 24 23:52:17.700680 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:52:17.700697 zram_generator::config[1057]: No configuration found.
Apr 24 23:52:17.700714 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:52:17.700728 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:52:17.700743 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:52:17.700755 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:52:17.700768 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:52:17.700780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:52:17.700792 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:52:17.700805 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:52:17.700818 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:52:17.700832 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:52:17.700846 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:52:17.700866 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:52:17.700880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:52:17.700894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:52:17.700906 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:52:17.700919 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:52:17.700964 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:52:17.700978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:52:17.700992 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:52:17.701019 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:52:17.701029 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:52:17.701037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:52:17.701046 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:52:17.701054 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:52:17.701062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:52:17.701071 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:52:17.701079 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:52:17.701090 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:52:17.701098 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:52:17.701106 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:52:17.701114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:52:17.701123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:52:17.701131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:52:17.701139 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:52:17.701153 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:52:17.701169 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:52:17.701186 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:52:17.701264 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:17.701274 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:52:17.701289 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:52:17.701304 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:52:17.701320 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:52:17.701334 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:52:17.701350 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:52:17.701364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:52:17.701378 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:52:17.701386 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:52:17.701394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:52:17.701409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:52:17.701423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:52:17.701438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:52:17.701453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:52:17.701472 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:52:17.701489 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:52:17.701501 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:52:17.701517 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:52:17.701532 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:52:17.701547 kernel: fuse: init (API version 7.39)
Apr 24 23:52:17.701562 kernel: loop: module loaded
Apr 24 23:52:17.701575 kernel: ACPI: bus type drm_connector registered
Apr 24 23:52:17.701589 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:52:17.701604 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:52:17.701622 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:52:17.701637 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:52:17.701651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:52:17.701666 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:52:17.701680 systemd[1]: Stopped verity-setup.service.
Apr 24 23:52:17.701695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:17.701734 systemd-journald[1127]: Collecting audit messages is disabled.
Apr 24 23:52:17.701761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:52:17.701778 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:52:17.701794 systemd-journald[1127]: Journal started
Apr 24 23:52:17.701824 systemd-journald[1127]: Runtime Journal (/run/log/journal/49437b82970e495cb767cdba7a6ab8c1) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:52:17.231058 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:52:17.394197 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 23:52:17.394867 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 23:52:17.705303 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:52:17.706601 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:52:17.708086 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:52:17.709622 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:52:17.711113 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:52:17.712590 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:52:17.714325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:52:17.716130 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:52:17.716280 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:52:17.718051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:52:17.718170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:52:17.719801 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:52:17.719922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:52:17.721519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:52:17.721638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:52:17.723372 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:52:17.723489 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:52:17.725056 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:52:17.725181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:52:17.726765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:52:17.728386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:52:17.730225 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:52:17.742288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:52:17.757111 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:52:17.760570 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:52:17.762099 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:52:17.762186 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:52:17.766211 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:52:17.769087 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:52:17.773326 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:52:17.774910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:52:17.776570 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:52:17.778694 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:52:17.780195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:52:17.780963 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:52:17.781626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:52:17.785044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:52:17.789279 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:52:17.794385 systemd-journald[1127]: Time spent on flushing to /var/log/journal/49437b82970e495cb767cdba7a6ab8c1 is 142.509ms for 953 entries.
Apr 24 23:52:17.794385 systemd-journald[1127]: System Journal (/var/log/journal/49437b82970e495cb767cdba7a6ab8c1) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:52:17.969144 systemd-journald[1127]: Received client request to flush runtime journal.
Apr 24 23:52:17.969224 kernel: loop0: detected capacity change from 0 to 142488
Apr 24 23:52:17.796109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:52:17.799619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:52:17.801699 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:52:17.803399 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:52:17.805260 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:52:17.951188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:52:17.953473 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:52:17.956222 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:52:17.965450 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:52:17.970702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:52:17.974488 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:52:17.981968 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 24 23:52:17.990557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:52:17.988398 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Apr 24 23:52:17.988411 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Apr 24 23:52:17.990672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:52:17.991167 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:52:17.994263 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:52:18.002123 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:52:18.017071 kernel: loop1: detected capacity change from 0 to 140768
Apr 24 23:52:18.043347 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:52:18.057880 kernel: loop2: detected capacity change from 0 to 217752
Apr 24 23:52:18.057309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:52:18.103690 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 24 23:52:18.104035 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 24 23:52:18.199634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:52:18.203961 kernel: loop3: detected capacity change from 0 to 142488
Apr 24 23:52:18.223947 kernel: loop4: detected capacity change from 0 to 140768
Apr 24 23:52:18.241091 kernel: loop5: detected capacity change from 0 to 217752
Apr 24 23:52:18.251711 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 24 23:52:18.255389 (sd-merge)[1200]: Merged extensions into '/usr'.
Apr 24 23:52:18.262629 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:52:18.262681 systemd[1]: Reloading...
Apr 24 23:52:18.319354 zram_generator::config[1230]: No configuration found.
Apr 24 23:52:18.462799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:52:18.498432 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:52:18.500810 systemd[1]: Reloading finished in 237 ms.
Apr 24 23:52:18.534076 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:52:18.539002 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:52:18.555067 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:52:18.556989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:52:18.562733 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:52:18.562753 systemd[1]: Reloading...
Apr 24 23:52:18.584917 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:52:18.585230 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:52:18.585797 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:52:18.585999 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 24 23:52:18.586042 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 24 23:52:18.589367 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:52:18.589383 systemd-tmpfiles[1264]: Skipping /boot
Apr 24 23:52:18.700686 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:52:18.700701 systemd-tmpfiles[1264]: Skipping /boot
Apr 24 23:52:18.738818 zram_generator::config[1303]: No configuration found.
Apr 24 23:52:18.850489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:52:18.884211 systemd[1]: Reloading finished in 321 ms.
Apr 24 23:52:18.901176 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:52:18.914351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:52:18.922035 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:52:18.925028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:52:18.927531 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:52:18.934230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:52:18.940324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:52:18.943331 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:52:18.947567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:18.947764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:52:18.950224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:52:18.956206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:52:18.959006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:52:18.960673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:52:18.968086 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:52:18.969571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:18.970381 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:52:18.972547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:52:18.972786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:52:18.975148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:52:18.975344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:52:18.977474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:52:18.977615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:52:18.984810 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:52:18.989998 augenrules[1359]: No rules
Apr 24 23:52:18.991020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:18.991186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:52:18.992628 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Apr 24 23:52:18.997171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:52:19.001220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:52:19.003786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:52:19.005538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:52:19.009036 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:52:19.010451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:19.011331 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:52:19.015835 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:52:19.017747 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:52:19.019732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:52:19.019857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:52:19.021684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:52:19.021802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:52:19.023818 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:52:19.023953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:52:19.027410 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:52:19.029335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:52:19.038217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:19.038476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:52:19.044722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:52:19.054228 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:52:19.057443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:52:19.062183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:52:19.063665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:52:19.067193 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:52:19.068572 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:52:19.068698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:52:19.069624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:52:19.069778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:52:19.073416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:52:19.073542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:52:19.075247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:52:19.075375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:52:19.076329 systemd-resolved[1334]: Positive Trust Anchors:
Apr 24 23:52:19.076348 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:52:19.076372 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:52:19.078297 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:52:19.078414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:52:19.082515 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:52:19.088969 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1378)
Apr 24 23:52:19.094723 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Apr 24 23:52:19.104097 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:52:19.105725 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 23:52:19.107786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:52:19.131743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:52:19.132102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:52:19.143103 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 24 23:52:19.192319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:52:19.200316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:52:19.213043 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 24 23:52:19.213409 systemd-networkd[1402]: lo: Link UP
Apr 24 23:52:19.213425 systemd-networkd[1402]: lo: Gained carrier
Apr 24 23:52:19.216565 systemd-networkd[1402]: Enumeration completed
Apr 24 23:52:19.216747 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:52:19.217490 systemd[1]: Reached target network.target - Network.
Apr 24 23:52:19.219272 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:52:19.219287 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:52:19.220449 systemd-networkd[1402]: eth0: Link UP
Apr 24 23:52:19.220463 systemd-networkd[1402]: eth0: Gained carrier
Apr 24 23:52:19.220474 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:52:19.226144 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:52:19.227980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:52:19.231672 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 24 23:52:19.235308 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:52:19.238480 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 23:52:19.240153 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection.
Apr 24 23:52:19.242600 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 24 23:52:19.243051 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 24 23:52:19.243017 systemd-timesyncd[1413]: Initial clock synchronization to Fri 2026-04-24 23:52:19.043636 UTC.
Apr 24 23:52:19.251098 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 24 23:52:19.251541 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 24 23:52:19.251689 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 24 23:52:19.251760 kernel: ACPI: button: Power Button [PWRF]
Apr 24 23:52:19.260155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:52:19.278989 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 23:52:19.566052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:52:19.577875 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:52:19.596096 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:52:19.609994 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:52:19.639769 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:52:19.641913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:52:19.643700 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:52:19.645344 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:52:19.648729 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:52:19.651035 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:52:19.652611 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:52:19.654388 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:52:19.656865 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:52:19.657056 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:52:19.658360 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:52:19.662489 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:52:19.669822 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:52:19.680982 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:52:19.683723 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:52:19.685647 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:52:19.687104 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:52:19.687644 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:52:19.689586 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:52:19.689615 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:52:19.690467 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:52:19.692632 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:52:19.693902 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:52:19.697084 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:52:19.699494 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:52:19.701671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:52:19.702163 jq[1442]: false
Apr 24 23:52:19.704186 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:52:19.709038 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:52:19.718663 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found loop3
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found loop4
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found loop5
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found sr0
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda1
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda2
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda3
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found usr
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda4
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda6
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda7
Apr 24 23:52:19.719505 extend-filesystems[1443]: Found vda9
Apr 24 23:52:19.719505 extend-filesystems[1443]: Checking size of /dev/vda9
Apr 24 23:52:19.788852 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 24 23:52:19.788894 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 24 23:52:19.788905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1380)
Apr 24 23:52:19.788915 extend-filesystems[1443]: Resized partition /dev/vda9
Apr 24 23:52:19.725484 dbus-daemon[1441]: [system] SELinux support is enabled
Apr 24 23:52:19.724136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:52:19.793404 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Apr 24 23:52:19.793404 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 24 23:52:19.793404 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 24 23:52:19.793404 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 24 23:52:19.729100 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:52:19.806643 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Apr 24 23:52:19.737242 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:52:19.810607 update_engine[1461]: I20260424 23:52:19.771533 1461 main.cc:92] Flatcar Update Engine starting
Apr 24 23:52:19.810607 update_engine[1461]: I20260424 23:52:19.772733 1461 update_check_scheduler.cc:74] Next update check in 9m51s
Apr 24 23:52:19.737738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:52:19.810839 jq[1463]: true
Apr 24 23:52:19.739352 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:52:19.743140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:52:19.750124 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:52:19.754532 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:52:19.768521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:52:19.768767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:52:19.769170 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:52:19.769299 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:52:19.778328 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:52:19.778502 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:52:19.790805 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:52:19.791048 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:52:19.796551 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 24 23:52:19.796564 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:52:19.800341 systemd-logind[1458]: New seat seat0.
Apr 24 23:52:19.806089 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:52:19.811606 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:52:19.812130 jq[1468]: true
Apr 24 23:52:19.822285 tar[1467]: linux-amd64/LICENSE
Apr 24 23:52:19.825292 tar[1467]: linux-amd64/helm
Apr 24 23:52:19.823712 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 24 23:52:19.831377 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:52:19.834857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:52:19.835016 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:52:19.836777 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:52:19.836984 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:52:19.845183 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:52:19.995150 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:52:19.998802 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:52:20.001444 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:52:20.006577 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 24 23:52:20.018181 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:52:20.029192 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:52:20.034878 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:52:20.035112 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:52:20.063828 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:52:20.067391 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:52:20.098538 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:52:20.107186 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 23:52:20.113382 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 23:52:20.114997 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 23:52:20.446048 containerd[1469]: time="2026-04-24T23:52:20.445695481Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:52:20.467872 containerd[1469]: time="2026-04-24T23:52:20.467717321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.471899 containerd[1469]: time="2026-04-24T23:52:20.471840865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:52:20.471899 containerd[1469]: time="2026-04-24T23:52:20.471886373Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:52:20.472003 containerd[1469]: time="2026-04-24T23:52:20.471941419Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:52:20.472289 containerd[1469]: time="2026-04-24T23:52:20.472255095Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:52:20.472311 containerd[1469]: time="2026-04-24T23:52:20.472296157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472364 containerd[1469]: time="2026-04-24T23:52:20.472346193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472388 containerd[1469]: time="2026-04-24T23:52:20.472364939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472670 containerd[1469]: time="2026-04-24T23:52:20.472637205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472670 containerd[1469]: time="2026-04-24T23:52:20.472662977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472701 containerd[1469]: time="2026-04-24T23:52:20.472674057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472701 containerd[1469]: time="2026-04-24T23:52:20.472683562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.472782 containerd[1469]: time="2026-04-24T23:52:20.472765578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.473162 containerd[1469]: time="2026-04-24T23:52:20.473131634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:52:20.473292 containerd[1469]: time="2026-04-24T23:52:20.473232396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:52:20.473292 containerd[1469]: time="2026-04-24T23:52:20.473284462Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:52:20.473383 containerd[1469]: time="2026-04-24T23:52:20.473366131Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:52:20.473438 containerd[1469]: time="2026-04-24T23:52:20.473424436Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:52:20.478120 containerd[1469]: time="2026-04-24T23:52:20.478072813Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:52:20.478219 containerd[1469]: time="2026-04-24T23:52:20.478193793Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:52:20.478274 containerd[1469]: time="2026-04-24T23:52:20.478250775Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:52:20.478294 containerd[1469]: time="2026-04-24T23:52:20.478273033Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:52:20.478294 containerd[1469]: time="2026-04-24T23:52:20.478286949Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:52:20.478489 containerd[1469]: time="2026-04-24T23:52:20.478433869Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:52:20.478710 containerd[1469]: time="2026-04-24T23:52:20.478694952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:52:20.478838 containerd[1469]: time="2026-04-24T23:52:20.478814110Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:52:20.478838 containerd[1469]: time="2026-04-24T23:52:20.478832173Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:52:20.478893 containerd[1469]: time="2026-04-24T23:52:20.478844020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:52:20.478893 containerd[1469]: time="2026-04-24T23:52:20.478854183Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478893 containerd[1469]: time="2026-04-24T23:52:20.478864306Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478893 containerd[1469]: time="2026-04-24T23:52:20.478873624Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478893 containerd[1469]: time="2026-04-24T23:52:20.478885005Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478986 containerd[1469]: time="2026-04-24T23:52:20.478917564Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478986 containerd[1469]: time="2026-04-24T23:52:20.478959462Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478986 containerd[1469]: time="2026-04-24T23:52:20.478971557Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.478986 containerd[1469]: time="2026-04-24T23:52:20.478979878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:52:20.479042 containerd[1469]: time="2026-04-24T23:52:20.479034564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479058 containerd[1469]: time="2026-04-24T23:52:20.479052735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479115 containerd[1469]: time="2026-04-24T23:52:20.479063018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479115 containerd[1469]: time="2026-04-24T23:52:20.479074029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479115 containerd[1469]: time="2026-04-24T23:52:20.479089969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479115 containerd[1469]: time="2026-04-24T23:52:20.479114152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479173 containerd[1469]: time="2026-04-24T23:52:20.479126531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479173 containerd[1469]: time="2026-04-24T23:52:20.479139258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479173 containerd[1469]: time="2026-04-24T23:52:20.479148813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479220 containerd[1469]: time="2026-04-24T23:52:20.479171408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479220 containerd[1469]: time="2026-04-24T23:52:20.479187793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479220 containerd[1469]: time="2026-04-24T23:52:20.479218350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479257 containerd[1469]: time="2026-04-24T23:52:20.479228488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479257 containerd[1469]: time="2026-04-24T23:52:20.479253334Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:52:20.479302 containerd[1469]: time="2026-04-24T23:52:20.479283600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479302 containerd[1469]: time="2026-04-24T23:52:20.479293652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479334 containerd[1469]: time="2026-04-24T23:52:20.479301611Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:52:20.479396 containerd[1469]: time="2026-04-24T23:52:20.479370259Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:52:20.479396 containerd[1469]: time="2026-04-24T23:52:20.479386150Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:52:20.479396 containerd[1469]: time="2026-04-24T23:52:20.479394095Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:52:20.479478 containerd[1469]: time="2026-04-24T23:52:20.479402115Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:52:20.479478 containerd[1469]: time="2026-04-24T23:52:20.479409144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.479478 containerd[1469]: time="2026-04-24T23:52:20.479447394Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:52:20.479530 containerd[1469]: time="2026-04-24T23:52:20.479482947Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:52:20.479530 containerd[1469]: time="2026-04-24T23:52:20.479490726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:52:20.480020 containerd[1469]: time="2026-04-24T23:52:20.479953248Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:52:20.480020 containerd[1469]: time="2026-04-24T23:52:20.480009258Z" level=info msg="Connect containerd service" Apr 24 23:52:20.480351 containerd[1469]: time="2026-04-24T23:52:20.480057278Z" level=info msg="using legacy CRI server" Apr 24 23:52:20.480351 containerd[1469]: time="2026-04-24T23:52:20.480063928Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:52:20.480351 containerd[1469]: time="2026-04-24T23:52:20.480264374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:52:20.481076 containerd[1469]: time="2026-04-24T23:52:20.481040987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:52:20.481403 containerd[1469]: time="2026-04-24T23:52:20.481329545Z" level=info msg="Start subscribing containerd event" Apr 24 23:52:20.481455 containerd[1469]: time="2026-04-24T23:52:20.481423894Z" level=info msg="Start recovering state" Apr 24 23:52:20.481545 containerd[1469]: time="2026-04-24T23:52:20.481499996Z" level=info msg="Start event monitor" Apr 24 23:52:20.481545 containerd[1469]: time="2026-04-24T23:52:20.481517765Z" level=info msg="Start 
snapshots syncer" Apr 24 23:52:20.481842 containerd[1469]: time="2026-04-24T23:52:20.481801952Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:52:20.481842 containerd[1469]: time="2026-04-24T23:52:20.481837402Z" level=info msg="Start streaming server" Apr 24 23:52:20.483829 containerd[1469]: time="2026-04-24T23:52:20.482093373Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:52:20.483829 containerd[1469]: time="2026-04-24T23:52:20.482156854Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:52:20.482364 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:52:20.484816 containerd[1469]: time="2026-04-24T23:52:20.484794354Z" level=info msg="containerd successfully booted in 0.040555s" Apr 24 23:52:20.580798 tar[1467]: linux-amd64/README.md Apr 24 23:52:20.594345 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:52:21.015990 systemd-networkd[1402]: eth0: Gained IPv6LL Apr 24 23:52:21.021876 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:52:21.024188 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:52:21.035144 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 23:52:21.041285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:52:21.044174 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:52:21.057621 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 24 23:52:21.057772 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 23:52:21.059560 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:52:21.064536 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 24 23:52:22.420196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:52:22.422377 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:52:22.424226 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:52:22.424596 systemd[1]: Startup finished in 1.491s (kernel) + 6.113s (initrd) + 5.615s (userspace) = 13.220s. Apr 24 23:52:23.083396 kubelet[1556]: E0424 23:52:23.082099 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:52:23.089408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:52:23.089602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:52:23.090673 systemd[1]: kubelet.service: Consumed 1.832s CPU time. Apr 24 23:52:24.541222 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:52:24.543146 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:44356.service - OpenSSH per-connection server daemon (10.0.0.1:44356). Apr 24 23:52:24.601133 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 44356 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:24.603428 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:24.613237 systemd-logind[1458]: New session 1 of user core. Apr 24 23:52:24.614196 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:52:24.624215 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 24 23:52:24.641617 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:52:24.654195 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:52:24.656620 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:52:24.744519 systemd[1573]: Queued start job for default target default.target. Apr 24 23:52:24.752977 systemd[1573]: Created slice app.slice - User Application Slice. Apr 24 23:52:24.753004 systemd[1573]: Reached target paths.target - Paths. Apr 24 23:52:24.753013 systemd[1573]: Reached target timers.target - Timers. Apr 24 23:52:24.754492 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:52:24.765219 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:52:24.765315 systemd[1573]: Reached target sockets.target - Sockets. Apr 24 23:52:24.765324 systemd[1573]: Reached target basic.target - Basic System. Apr 24 23:52:24.765351 systemd[1573]: Reached target default.target - Main User Target. Apr 24 23:52:24.765371 systemd[1573]: Startup finished in 98ms. Apr 24 23:52:24.765792 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:52:24.769311 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:52:24.839043 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:44372.service - OpenSSH per-connection server daemon (10.0.0.1:44372). Apr 24 23:52:24.894882 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 44372 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:24.897102 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:24.906481 systemd-logind[1458]: New session 2 of user core. Apr 24 23:52:24.921907 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 24 23:52:24.984519 sshd[1584]: pam_unix(sshd:session): session closed for user core Apr 24 23:52:24.999257 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:44372.service: Deactivated successfully. Apr 24 23:52:25.002364 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:52:25.004990 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:52:25.010797 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:44386.service - OpenSSH per-connection server daemon (10.0.0.1:44386). Apr 24 23:52:25.012601 systemd-logind[1458]: Removed session 2. Apr 24 23:52:25.070263 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 44386 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:25.072408 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:25.081095 systemd-logind[1458]: New session 3 of user core. Apr 24 23:52:25.103145 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:52:25.156621 sshd[1591]: pam_unix(sshd:session): session closed for user core Apr 24 23:52:25.172570 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:44386.service: Deactivated successfully. Apr 24 23:52:25.174089 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:52:25.175504 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:52:25.189216 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:44392.service - OpenSSH per-connection server daemon (10.0.0.1:44392). Apr 24 23:52:25.189856 systemd-logind[1458]: Removed session 3. Apr 24 23:52:25.222915 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 44392 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:25.224726 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:25.241755 systemd-logind[1458]: New session 4 of user core. Apr 24 23:52:25.251184 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 24 23:52:25.326743 sshd[1598]: pam_unix(sshd:session): session closed for user core Apr 24 23:52:25.338030 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:44392.service: Deactivated successfully. Apr 24 23:52:25.339408 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:52:25.340622 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:52:25.351189 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:44404.service - OpenSSH per-connection server daemon (10.0.0.1:44404). Apr 24 23:52:25.352106 systemd-logind[1458]: Removed session 4. Apr 24 23:52:25.387010 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:25.388493 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:25.394824 systemd-logind[1458]: New session 5 of user core. Apr 24 23:52:25.405099 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 23:52:25.471388 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:52:25.471795 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:52:25.488649 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 24 23:52:25.491850 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 24 23:52:25.504973 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:44404.service: Deactivated successfully. Apr 24 23:52:25.510117 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:52:25.512904 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:52:25.524215 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:44418.service - OpenSSH per-connection server daemon (10.0.0.1:44418). Apr 24 23:52:25.524868 systemd-logind[1458]: Removed session 5. 
Apr 24 23:52:25.563897 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 44418 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:25.567669 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:25.573545 systemd-logind[1458]: New session 6 of user core. Apr 24 23:52:25.586097 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 23:52:25.642312 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:52:25.642666 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:52:25.650209 sudo[1617]: pam_unix(sudo:session): session closed for user root Apr 24 23:52:25.670391 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:52:25.670835 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:52:25.708268 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:52:25.711015 auditctl[1620]: No rules Apr 24 23:52:25.711281 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:52:25.711577 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:52:25.714006 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:52:25.746788 augenrules[1638]: No rules Apr 24 23:52:25.748229 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:52:25.749597 sudo[1616]: pam_unix(sudo:session): session closed for user root Apr 24 23:52:25.752579 sshd[1613]: pam_unix(sshd:session): session closed for user core Apr 24 23:52:25.766493 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:44418.service: Deactivated successfully. Apr 24 23:52:25.768197 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 24 23:52:25.769666 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:52:25.770968 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:44420.service - OpenSSH per-connection server daemon (10.0.0.1:44420). Apr 24 23:52:25.771576 systemd-logind[1458]: Removed session 6. Apr 24 23:52:25.810544 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 44420 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:52:25.812870 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:52:25.818639 systemd-logind[1458]: New session 7 of user core. Apr 24 23:52:25.830037 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:52:25.906548 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:52:25.909137 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:52:26.896355 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:52:26.896364 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:52:27.762740 dockerd[1667]: time="2026-04-24T23:52:27.762421474Z" level=info msg="Starting up" Apr 24 23:52:28.088717 systemd[1]: var-lib-docker-metacopy\x2dcheck1379848549-merged.mount: Deactivated successfully. Apr 24 23:52:28.111177 dockerd[1667]: time="2026-04-24T23:52:28.110421356Z" level=info msg="Loading containers: start." Apr 24 23:52:28.241999 kernel: Initializing XFRM netlink socket Apr 24 23:52:28.316395 systemd-networkd[1402]: docker0: Link UP Apr 24 23:52:28.348021 dockerd[1667]: time="2026-04-24T23:52:28.347635932Z" level=info msg="Loading containers: done." 
Apr 24 23:52:28.370736 dockerd[1667]: time="2026-04-24T23:52:28.370499077Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:52:28.371341 dockerd[1667]: time="2026-04-24T23:52:28.371211765Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:52:28.371514 dockerd[1667]: time="2026-04-24T23:52:28.371479324Z" level=info msg="Daemon has completed initialization" Apr 24 23:52:28.415740 dockerd[1667]: time="2026-04-24T23:52:28.415293735Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:52:28.416772 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:52:29.321673 containerd[1469]: time="2026-04-24T23:52:29.321390200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 24 23:52:30.175579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704681226.mount: Deactivated successfully. 
Apr 24 23:52:30.855713 containerd[1469]: time="2026-04-24T23:52:30.855420475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:30.858986 containerd[1469]: time="2026-04-24T23:52:30.858749035Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 24 23:52:30.860246 containerd[1469]: time="2026-04-24T23:52:30.860210948Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:30.868405 containerd[1469]: time="2026-04-24T23:52:30.868195170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:30.870447 containerd[1469]: time="2026-04-24T23:52:30.870282047Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.548647957s" Apr 24 23:52:30.870447 containerd[1469]: time="2026-04-24T23:52:30.870466087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 24 23:52:30.872669 containerd[1469]: time="2026-04-24T23:52:30.872641359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 24 23:52:31.780843 containerd[1469]: time="2026-04-24T23:52:31.780544128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:31.782055 containerd[1469]: time="2026-04-24T23:52:31.781824719Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 24 23:52:31.783268 containerd[1469]: time="2026-04-24T23:52:31.783211377Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:31.786251 containerd[1469]: time="2026-04-24T23:52:31.786195095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:31.788864 containerd[1469]: time="2026-04-24T23:52:31.788678635Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 916.003064ms" Apr 24 23:52:31.788864 containerd[1469]: time="2026-04-24T23:52:31.788778228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 24 23:52:31.791813 containerd[1469]: time="2026-04-24T23:52:31.791605925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 24 23:52:32.492716 containerd[1469]: time="2026-04-24T23:52:32.492330477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:32.494223 containerd[1469]: time="2026-04-24T23:52:32.492811325Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 24 23:52:32.494223 containerd[1469]: time="2026-04-24T23:52:32.494156216Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:32.497161 containerd[1469]: time="2026-04-24T23:52:32.497121200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:32.498572 containerd[1469]: time="2026-04-24T23:52:32.498531963Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 706.771204ms" Apr 24 23:52:32.498632 containerd[1469]: time="2026-04-24T23:52:32.498573705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 24 23:52:32.500209 containerd[1469]: time="2026-04-24T23:52:32.500172494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 24 23:52:33.342552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:52:33.350217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:52:34.549873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:52:34.553820 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:52:34.630812 kubelet[1893]: E0424 23:52:34.630376 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:52:34.633677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535379515.mount: Deactivated successfully. Apr 24 23:52:34.634886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:52:34.635042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:52:35.262358 containerd[1469]: time="2026-04-24T23:52:35.261959019Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 24 23:52:35.280391 containerd[1469]: time="2026-04-24T23:52:35.279991868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:35.282816 containerd[1469]: time="2026-04-24T23:52:35.282338001Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:35.285546 containerd[1469]: time="2026-04-24T23:52:35.285304759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:35.286269 containerd[1469]: time="2026-04-24T23:52:35.286220096Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id 
\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 2.785981315s" Apr 24 23:52:35.286424 containerd[1469]: time="2026-04-24T23:52:35.286361781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 24 23:52:35.290287 containerd[1469]: time="2026-04-24T23:52:35.290255348Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 24 23:52:35.741553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824094966.mount: Deactivated successfully. Apr 24 23:52:37.219810 containerd[1469]: time="2026-04-24T23:52:37.219383218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:37.219810 containerd[1469]: time="2026-04-24T23:52:37.219587323Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 24 23:52:37.221905 containerd[1469]: time="2026-04-24T23:52:37.221842439Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:37.226477 containerd[1469]: time="2026-04-24T23:52:37.226278014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:37.228505 containerd[1469]: time="2026-04-24T23:52:37.228423462Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.93813677s" Apr 24 23:52:37.228505 containerd[1469]: time="2026-04-24T23:52:37.228487008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 24 23:52:37.230144 containerd[1469]: time="2026-04-24T23:52:37.230105276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 23:52:37.638417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488514545.mount: Deactivated successfully. Apr 24 23:52:37.645906 containerd[1469]: time="2026-04-24T23:52:37.645640931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:37.646479 containerd[1469]: time="2026-04-24T23:52:37.646239610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 24 23:52:37.647340 containerd[1469]: time="2026-04-24T23:52:37.647203397Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:37.649220 containerd[1469]: time="2026-04-24T23:52:37.649161106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:37.649889 containerd[1469]: time="2026-04-24T23:52:37.649849229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 419.709288ms" Apr 24 23:52:37.649981 containerd[1469]: time="2026-04-24T23:52:37.649886397Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 23:52:37.651098 containerd[1469]: time="2026-04-24T23:52:37.651073940Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 24 23:52:38.362681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260589023.mount: Deactivated successfully. Apr 24 23:52:39.325410 containerd[1469]: time="2026-04-24T23:52:39.325139766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:39.327263 containerd[1469]: time="2026-04-24T23:52:39.325340431Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 24 23:52:39.329339 containerd[1469]: time="2026-04-24T23:52:39.329268802Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:39.330171 containerd[1469]: time="2026-04-24T23:52:39.330132209Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.679029542s" Apr 24 23:52:39.330211 containerd[1469]: time="2026-04-24T23:52:39.330178357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 24 23:52:39.330795 containerd[1469]: 
time="2026-04-24T23:52:39.330738405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:52:40.570194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:52:40.589626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:52:40.625235 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit session-7.scope)... Apr 24 23:52:40.625301 systemd[1]: Reloading... Apr 24 23:52:40.818993 zram_generator::config[2099]: No configuration found. Apr 24 23:52:40.997734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:52:41.076067 systemd[1]: Reloading finished in 449 ms. Apr 24 23:52:41.141205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:52:41.141278 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:52:41.141587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:52:41.143107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:52:41.285709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:52:41.310030 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:52:41.500327 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:52:41.890796 kubelet[2147]: I0424 23:52:41.886693 2147 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 23:52:41.890796 kubelet[2147]: I0424 23:52:41.890849 2147 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:52:41.891315 kubelet[2147]: I0424 23:52:41.890910 2147 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:52:41.891315 kubelet[2147]: I0424 23:52:41.890916 2147 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:52:41.891315 kubelet[2147]: I0424 23:52:41.891225 2147 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 23:52:41.930651 kubelet[2147]: I0424 23:52:41.930389 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:52:41.931444 kubelet[2147]: E0424 23:52:41.931128 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:52:41.942140 kubelet[2147]: E0424 23:52:41.942016 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:52:41.942140 kubelet[2147]: I0424 23:52:41.942162 2147 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 24 23:52:41.952667 kubelet[2147]: I0424 23:52:41.952454 2147 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 23:52:41.954159 kubelet[2147]: I0424 23:52:41.953997 2147 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:52:41.954751 kubelet[2147]: I0424 23:52:41.954141 2147 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:52:41.955219 kubelet[2147]: I0424 23:52:41.954777 2147 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 23:52:41.955219 
kubelet[2147]: I0424 23:52:41.954787 2147 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 23:52:41.955272 kubelet[2147]: I0424 23:52:41.955231 2147 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:52:41.960581 kubelet[2147]: I0424 23:52:41.960065 2147 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 24 23:52:41.961444 kubelet[2147]: I0424 23:52:41.961161 2147 kubelet.go:482] "Attempting to sync node with API server" Apr 24 23:52:41.961444 kubelet[2147]: I0424 23:52:41.961190 2147 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:52:41.961444 kubelet[2147]: I0424 23:52:41.961285 2147 kubelet.go:394] "Adding apiserver pod source" Apr 24 23:52:41.961444 kubelet[2147]: I0424 23:52:41.961348 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:52:41.969031 kubelet[2147]: I0424 23:52:41.968804 2147 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:52:41.975481 kubelet[2147]: I0424 23:52:41.975194 2147 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:52:41.975481 kubelet[2147]: I0424 23:52:41.975349 2147 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:52:41.976121 kubelet[2147]: W0424 23:52:41.975619 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 24 23:52:41.981260 kubelet[2147]: I0424 23:52:41.981225 2147 server.go:1257] "Started kubelet" Apr 24 23:52:41.985100 kubelet[2147]: I0424 23:52:41.982512 2147 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:52:41.985100 kubelet[2147]: I0424 23:52:41.982682 2147 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:52:41.985100 kubelet[2147]: I0424 23:52:41.983215 2147 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:52:41.985100 kubelet[2147]: I0424 23:52:41.983321 2147 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:52:41.985100 kubelet[2147]: I0424 23:52:41.983710 2147 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 24 23:52:41.987251 kubelet[2147]: I0424 23:52:41.985291 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:52:41.993986 kubelet[2147]: I0424 23:52:41.993764 2147 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:52:41.998753 kubelet[2147]: E0424 23:52:41.993572 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a97014a733297b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:52:41.981176187 +0000 UTC m=+0.535197410,LastTimestamp:2026-04-24 23:52:41.981176187 +0000 UTC m=+0.535197410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 23:52:42.003343 kubelet[2147]: I0424 23:52:42.001383 2147 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 23:52:42.003343 kubelet[2147]: I0424 23:52:42.003265 2147 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:52:42.003343 kubelet[2147]: I0424 23:52:42.003411 2147 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:52:42.005224 kubelet[2147]: E0424 23:52:42.004179 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:52:42.005224 kubelet[2147]: E0424 23:52:42.004979 2147 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms" Apr 24 23:52:42.005967 kubelet[2147]: I0424 23:52:42.005294 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:52:42.011293 kubelet[2147]: I0424 23:52:42.010953 2147 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:52:42.011293 kubelet[2147]: I0424 23:52:42.011167 2147 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:52:42.012050 kubelet[2147]: E0424 23:52:42.011794 2147 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:52:42.044004 kubelet[2147]: I0424 23:52:42.043813 2147 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:52:42.045876 kubelet[2147]: I0424 23:52:42.045820 2147 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:52:42.045978 kubelet[2147]: I0424 23:52:42.045898 2147 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 23:52:42.046842 kubelet[2147]: I0424 23:52:42.046213 2147 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 23:52:42.046842 kubelet[2147]: E0424 23:52:42.046604 2147 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:52:42.046842 kubelet[2147]: I0424 23:52:42.046624 2147 cpu_manager.go:225] "Starting" policy="none" Apr 24 23:52:42.046842 kubelet[2147]: I0424 23:52:42.046633 2147 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 23:52:42.046842 kubelet[2147]: I0424 23:52:42.046659 2147 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 23:52:42.055048 kubelet[2147]: I0424 23:52:42.054978 2147 policy_none.go:50] "Start" Apr 24 23:52:42.055048 kubelet[2147]: I0424 23:52:42.055049 2147 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:52:42.055048 kubelet[2147]: I0424 23:52:42.055079 2147 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:52:42.057093 kubelet[2147]: I0424 23:52:42.057072 2147 policy_none.go:44] "Start" Apr 24 23:52:42.067895 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 23:52:42.088062 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 23:52:42.093622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 23:52:42.105411 kubelet[2147]: E0424 23:52:42.105256 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:52:42.105411 kubelet[2147]: E0424 23:52:42.105419 2147 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:52:42.106374 kubelet[2147]: I0424 23:52:42.106329 2147 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 23:52:42.106574 kubelet[2147]: I0424 23:52:42.106378 2147 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:52:42.107212 kubelet[2147]: I0424 23:52:42.107181 2147 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 23:52:42.108812 kubelet[2147]: E0424 23:52:42.108767 2147 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:52:42.108853 kubelet[2147]: E0424 23:52:42.108843 2147 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 24 23:52:42.183638 systemd[1]: Created slice kubepods-burstable-pod76ece98090b0de32600710bc18e2048a.slice - libcontainer container kubepods-burstable-pod76ece98090b0de32600710bc18e2048a.slice. 
Apr 24 23:52:42.203855 kubelet[2147]: E0424 23:52:42.203719 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:52:42.206291 kubelet[2147]: E0424 23:52:42.206174 2147 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms" Apr 24 23:52:42.208423 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 24 23:52:42.208921 kubelet[2147]: I0424 23:52:42.208896 2147 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 23:52:42.209360 kubelet[2147]: E0424 23:52:42.209242 2147 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Apr 24 23:52:42.210526 kubelet[2147]: E0424 23:52:42.210473 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:52:42.212562 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 24 23:52:42.213979 kubelet[2147]: E0424 23:52:42.213961 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:52:42.305721 kubelet[2147]: I0424 23:52:42.305424 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76ece98090b0de32600710bc18e2048a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ece98090b0de32600710bc18e2048a\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:52:42.305721 kubelet[2147]: I0424 23:52:42.305639 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76ece98090b0de32600710bc18e2048a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76ece98090b0de32600710bc18e2048a\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:52:42.305721 kubelet[2147]: I0424 23:52:42.305677 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:52:42.305721 kubelet[2147]: I0424 23:52:42.305697 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:52:42.305721 kubelet[2147]: I0424 23:52:42.305771 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:52:42.307548 kubelet[2147]: I0424 23:52:42.305797 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:52:42.307548 kubelet[2147]: I0424 23:52:42.305847 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:52:42.307548 kubelet[2147]: I0424 23:52:42.305867 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:52:42.307548 kubelet[2147]: I0424 23:52:42.307418 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76ece98090b0de32600710bc18e2048a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ece98090b0de32600710bc18e2048a\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:52:42.462000 kubelet[2147]: I0424 23:52:42.460813 2147 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 23:52:42.469473 kubelet[2147]: 
E0424 23:52:42.466492 2147 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Apr 24 23:52:42.511630 kubelet[2147]: E0424 23:52:42.511137 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:52:42.514735 kubelet[2147]: E0424 23:52:42.514695 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:52:42.515264 containerd[1469]: time="2026-04-24T23:52:42.515132649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76ece98090b0de32600710bc18e2048a,Namespace:kube-system,Attempt:0,}" Apr 24 23:52:42.515872 containerd[1469]: time="2026-04-24T23:52:42.515462915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 24 23:52:42.518920 kubelet[2147]: E0424 23:52:42.518578 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:52:42.520716 containerd[1469]: time="2026-04-24T23:52:42.520626381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 24 23:52:42.609544 kubelet[2147]: E0424 23:52:42.609274 2147 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms" Apr 24 23:52:42.875823 
kubelet[2147]: I0424 23:52:42.875246 2147 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 23:52:42.876353 kubelet[2147]: E0424 23:52:42.876317 2147 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Apr 24 23:52:42.920003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423637475.mount: Deactivated successfully. Apr 24 23:52:42.931947 containerd[1469]: time="2026-04-24T23:52:42.931603775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:52:42.932796 containerd[1469]: time="2026-04-24T23:52:42.932356648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:52:42.933454 containerd[1469]: time="2026-04-24T23:52:42.933410831Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:52:42.934032 containerd[1469]: time="2026-04-24T23:52:42.934010715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:52:42.934546 containerd[1469]: time="2026-04-24T23:52:42.934490946Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:52:42.935031 containerd[1469]: time="2026-04-24T23:52:42.934999765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:52:42.935742 
containerd[1469]: time="2026-04-24T23:52:42.935683700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 24 23:52:42.936871 containerd[1469]: time="2026-04-24T23:52:42.936811789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:52:42.938516 containerd[1469]: time="2026-04-24T23:52:42.938468781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.930696ms" Apr 24 23:52:42.938972 containerd[1469]: time="2026-04-24T23:52:42.938911268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 418.077398ms" Apr 24 23:52:42.941763 containerd[1469]: time="2026-04-24T23:52:42.941189141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.834974ms" Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172848770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172909651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172918258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.173010317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172137341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172205189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172213612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172123609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172191267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172199605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:52:43.174045 containerd[1469]: time="2026-04-24T23:52:43.172282668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:52:43.174640 containerd[1469]: time="2026-04-24T23:52:43.174053660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:52:43.361444 systemd[1]: Started cri-containerd-14b2cd8ca9f61d7765a5eed0d1ba8fe2c64fceef975e94a7373b7b7117e04bf4.scope - libcontainer container 14b2cd8ca9f61d7765a5eed0d1ba8fe2c64fceef975e94a7373b7b7117e04bf4. Apr 24 23:52:43.369093 systemd[1]: Started cri-containerd-c9cb3501355e4ff908c97951702c09abc69859efd5e230c203eba851a7139023.scope - libcontainer container c9cb3501355e4ff908c97951702c09abc69859efd5e230c203eba851a7139023. Apr 24 23:52:43.371398 systemd[1]: Started cri-containerd-fcc4089f9dcb6e0bc5a0b34707d444dad8ad2a36d1de21c844e1037f890392e7.scope - libcontainer container fcc4089f9dcb6e0bc5a0b34707d444dad8ad2a36d1de21c844e1037f890392e7. 
Apr 24 23:52:43.417854 kubelet[2147]: E0424 23:52:43.417632 2147 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s"
Apr 24 23:52:43.522314 containerd[1469]: time="2026-04-24T23:52:43.522064629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76ece98090b0de32600710bc18e2048a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc4089f9dcb6e0bc5a0b34707d444dad8ad2a36d1de21c844e1037f890392e7\""
Apr 24 23:52:43.524275 containerd[1469]: time="2026-04-24T23:52:43.523695394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"14b2cd8ca9f61d7765a5eed0d1ba8fe2c64fceef975e94a7373b7b7117e04bf4\""
Apr 24 23:52:43.531854 kubelet[2147]: E0424 23:52:43.531615 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:43.533071 kubelet[2147]: E0424 23:52:43.532639 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:43.539042 containerd[1469]: time="2026-04-24T23:52:43.537400110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9cb3501355e4ff908c97951702c09abc69859efd5e230c203eba851a7139023\""
Apr 24 23:52:43.539589 kubelet[2147]: E0424 23:52:43.538375 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:43.553245 containerd[1469]: time="2026-04-24T23:52:43.552682052Z" level=info msg="CreateContainer within sandbox \"14b2cd8ca9f61d7765a5eed0d1ba8fe2c64fceef975e94a7373b7b7117e04bf4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 24 23:52:43.556776 containerd[1469]: time="2026-04-24T23:52:43.556555832Z" level=info msg="CreateContainer within sandbox \"fcc4089f9dcb6e0bc5a0b34707d444dad8ad2a36d1de21c844e1037f890392e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 24 23:52:43.556871 containerd[1469]: time="2026-04-24T23:52:43.556585768Z" level=info msg="CreateContainer within sandbox \"c9cb3501355e4ff908c97951702c09abc69859efd5e230c203eba851a7139023\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 24 23:52:43.678497 containerd[1469]: time="2026-04-24T23:52:43.678163003Z" level=info msg="CreateContainer within sandbox \"c9cb3501355e4ff908c97951702c09abc69859efd5e230c203eba851a7139023\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58bcd901db58a4ec2a7f54931ac1a6feb5bbfb877fb9d5f1a473b063d012cac2\""
Apr 24 23:52:43.682289 containerd[1469]: time="2026-04-24T23:52:43.682207475Z" level=info msg="StartContainer for \"58bcd901db58a4ec2a7f54931ac1a6feb5bbfb877fb9d5f1a473b063d012cac2\""
Apr 24 23:52:43.682665 kubelet[2147]: I0424 23:52:43.682607 2147 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:52:43.683214 containerd[1469]: time="2026-04-24T23:52:43.682797929Z" level=info msg="CreateContainer within sandbox \"14b2cd8ca9f61d7765a5eed0d1ba8fe2c64fceef975e94a7373b7b7117e04bf4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5b82226986bbfe9d928aa5e777c81dd70fbde3aedf4d9c8b1e7f503d428795f\""
Apr 24 23:52:43.684966 containerd[1469]: time="2026-04-24T23:52:43.684380631Z" level=info msg="CreateContainer within sandbox \"fcc4089f9dcb6e0bc5a0b34707d444dad8ad2a36d1de21c844e1037f890392e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5388c846366211de023e8d1f45ca181fbe56d714d579a7d7285a882b779ef30f\""
Apr 24 23:52:43.685253 kubelet[2147]: E0424 23:52:43.685131 2147 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost"
Apr 24 23:52:43.687440 containerd[1469]: time="2026-04-24T23:52:43.686030759Z" level=info msg="StartContainer for \"5388c846366211de023e8d1f45ca181fbe56d714d579a7d7285a882b779ef30f\""
Apr 24 23:52:43.687440 containerd[1469]: time="2026-04-24T23:52:43.686191725Z" level=info msg="StartContainer for \"c5b82226986bbfe9d928aa5e777c81dd70fbde3aedf4d9c8b1e7f503d428795f\""
Apr 24 23:52:43.730109 systemd[1]: Started cri-containerd-5388c846366211de023e8d1f45ca181fbe56d714d579a7d7285a882b779ef30f.scope - libcontainer container 5388c846366211de023e8d1f45ca181fbe56d714d579a7d7285a882b779ef30f.
Apr 24 23:52:43.733322 systemd[1]: Started cri-containerd-58bcd901db58a4ec2a7f54931ac1a6feb5bbfb877fb9d5f1a473b063d012cac2.scope - libcontainer container 58bcd901db58a4ec2a7f54931ac1a6feb5bbfb877fb9d5f1a473b063d012cac2.
Apr 24 23:52:43.734707 systemd[1]: Started cri-containerd-c5b82226986bbfe9d928aa5e777c81dd70fbde3aedf4d9c8b1e7f503d428795f.scope - libcontainer container c5b82226986bbfe9d928aa5e777c81dd70fbde3aedf4d9c8b1e7f503d428795f.
Apr 24 23:52:44.012326 kubelet[2147]: E0424 23:52:44.012012 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a97014a733297b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:52:41.981176187 +0000 UTC m=+0.535197410,LastTimestamp:2026-04-24 23:52:41.981176187 +0000 UTC m=+0.535197410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 24 23:52:44.017775 containerd[1469]: time="2026-04-24T23:52:44.017538418Z" level=info msg="StartContainer for \"5388c846366211de023e8d1f45ca181fbe56d714d579a7d7285a882b779ef30f\" returns successfully"
Apr 24 23:52:44.030241 containerd[1469]: time="2026-04-24T23:52:44.029829026Z" level=info msg="StartContainer for \"c5b82226986bbfe9d928aa5e777c81dd70fbde3aedf4d9c8b1e7f503d428795f\" returns successfully"
Apr 24 23:52:44.045639 containerd[1469]: time="2026-04-24T23:52:44.045532724Z" level=info msg="StartContainer for \"58bcd901db58a4ec2a7f54931ac1a6feb5bbfb877fb9d5f1a473b063d012cac2\" returns successfully"
Apr 24 23:52:44.074723 kubelet[2147]: E0424 23:52:44.074415 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:44.082436 kubelet[2147]: E0424 23:52:44.081655 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:44.091908 kubelet[2147]: E0424 23:52:44.091746 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:44.092347 kubelet[2147]: E0424 23:52:44.092137 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:44.093397 kubelet[2147]: E0424 23:52:44.093364 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:44.093663 kubelet[2147]: E0424 23:52:44.093636 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:45.103332 kubelet[2147]: E0424 23:52:45.103225 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:45.104618 kubelet[2147]: E0424 23:52:45.103606 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:45.105256 kubelet[2147]: E0424 23:52:45.105210 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:45.105346 kubelet[2147]: E0424 23:52:45.105302 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:45.105839 kubelet[2147]: E0424 23:52:45.105414 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:45.105839 kubelet[2147]: E0424 23:52:45.105482 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:45.291321 kubelet[2147]: I0424 23:52:45.291216 2147 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:52:45.924886 kubelet[2147]: E0424 23:52:45.924677 2147 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 24 23:52:45.966898 kubelet[2147]: I0424 23:52:45.966776 2147 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 24 23:52:45.966898 kubelet[2147]: E0424 23:52:45.966865 2147 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 24 23:52:45.985411 kubelet[2147]: E0424 23:52:45.985257 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:52:46.086913 kubelet[2147]: E0424 23:52:46.086601 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:52:46.108913 kubelet[2147]: E0424 23:52:46.108812 2147 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:52:46.110058 kubelet[2147]: E0424 23:52:46.109148 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:46.188469 kubelet[2147]: E0424 23:52:46.187355 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:52:46.290915 kubelet[2147]: E0424 23:52:46.290084 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:52:46.391717 kubelet[2147]: E0424 23:52:46.391343 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:52:46.493321 kubelet[2147]: E0424 23:52:46.492407 2147 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:52:46.602379 kubelet[2147]: I0424 23:52:46.602298 2147 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:46.610652 kubelet[2147]: E0424 23:52:46.610594 2147 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:46.610652 kubelet[2147]: I0424 23:52:46.610622 2147 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:46.612259 kubelet[2147]: E0424 23:52:46.612211 2147 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:46.612259 kubelet[2147]: I0424 23:52:46.612234 2147 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:52:46.613691 kubelet[2147]: E0424 23:52:46.613667 2147 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:52:47.011303 kubelet[2147]: I0424 23:52:47.011032 2147 apiserver.go:52] "Watching apiserver"
Apr 24 23:52:47.106679 kubelet[2147]: I0424 23:52:47.104703 2147 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 24 23:52:48.329892 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)...
Apr 24 23:52:48.329914 systemd[1]: Reloading...
Apr 24 23:52:48.454204 zram_generator::config[2473]: No configuration found.
Apr 24 23:52:48.636733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:52:48.700881 systemd[1]: Reloading finished in 370 ms.
Apr 24 23:52:48.762366 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:52:48.776836 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:52:48.777289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:52:48.777364 systemd[1]: kubelet.service: Consumed 1.991s CPU time, 130.2M memory peak, 0B memory swap peak.
Apr 24 23:52:48.785099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:52:48.921335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:52:48.929362 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:52:48.993577 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:52:49.007409 kubelet[2518]: I0424 23:52:49.007304 2518 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 24 23:52:49.007409 kubelet[2518]: I0424 23:52:49.007367 2518 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:52:49.007409 kubelet[2518]: I0424 23:52:49.007396 2518 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 24 23:52:49.007409 kubelet[2518]: I0424 23:52:49.007401 2518 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:52:49.008024 kubelet[2518]: I0424 23:52:49.007599 2518 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 24 23:52:49.009193 kubelet[2518]: I0424 23:52:49.009154 2518 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 24 23:52:49.011387 kubelet[2518]: I0424 23:52:49.011337 2518 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:52:49.029432 kubelet[2518]: E0424 23:52:49.029260 2518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:52:49.029432 kubelet[2518]: I0424 23:52:49.029473 2518 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:52:49.053354 kubelet[2518]: I0424 23:52:49.044129 2518 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 24 23:52:49.053354 kubelet[2518]: I0424 23:52:49.044525 2518 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:52:49.053354 kubelet[2518]: I0424 23:52:49.044592 2518 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 23:52:49.053354 kubelet[2518]: I0424 23:52:49.044864 2518 topology_manager.go:143] "Creating topology manager with none policy"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.044871 2518 container_manager_linux.go:308] "Creating device plugin manager"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.044900 2518 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.045367 2518 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.045580 2518 kubelet.go:482] "Attempting to sync node with API server"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.045594 2518 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.045616 2518 kubelet.go:394] "Adding apiserver pod source"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.045627 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:52:49.097530 kubelet[2518]: I0424 23:52:49.052860 2518 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:52:49.153649 kubelet[2518]: I0424 23:52:49.142228 2518 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:52:49.153649 kubelet[2518]: I0424 23:52:49.142403 2518 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 24 23:52:49.153649 kubelet[2518]: I0424 23:52:49.152645 2518 server.go:1257] "Started kubelet"
Apr 24 23:52:49.153899 kubelet[2518]: I0424 23:52:49.153740 2518 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:52:49.154027 kubelet[2518]: I0424 23:52:49.153986 2518 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 24 23:52:49.154353 kubelet[2518]: I0424 23:52:49.154251 2518 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:52:49.156143 kubelet[2518]: I0424 23:52:49.156127 2518 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:52:49.160533 kubelet[2518]: I0424 23:52:49.157829 2518 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:52:49.165007 kubelet[2518]: I0424 23:52:49.164878 2518 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 24 23:52:49.169087 kubelet[2518]: I0424 23:52:49.167965 2518 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 24 23:52:49.169087 kubelet[2518]: I0424 23:52:49.166678 2518 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:52:49.169087 kubelet[2518]: I0424 23:52:49.168189 2518 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 24 23:52:49.169087 kubelet[2518]: I0424 23:52:49.168611 2518 reconciler.go:29] "Reconciler: start to sync state"
Apr 24 23:52:49.181449 kubelet[2518]: E0424 23:52:49.179487 2518 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:52:49.190909 kubelet[2518]: I0424 23:52:49.190763 2518 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:52:49.190909 kubelet[2518]: I0424 23:52:49.190807 2518 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:52:49.192356 kubelet[2518]: I0424 23:52:49.191040 2518 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:52:49.205272 kubelet[2518]: I0424 23:52:49.205098 2518 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:52:49.208701 kubelet[2518]: I0424 23:52:49.208654 2518 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:52:49.209242 kubelet[2518]: I0424 23:52:49.209228 2518 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 24 23:52:49.215143 kubelet[2518]: I0424 23:52:49.209440 2518 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 24 23:52:49.219428 kubelet[2518]: E0424 23:52:49.217688 2518 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.241783 2518 cpu_manager.go:225] "Starting" policy="none"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.241798 2518 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.241818 2518 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.241995 2518 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.242007 2518 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.242021 2518 policy_none.go:50] "Start"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.242032 2518 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.242067 2518 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.242143 2518 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 24 23:52:49.242998 kubelet[2518]: I0424 23:52:49.242160 2518 policy_none.go:44] "Start"
Apr 24 23:52:49.254170 kubelet[2518]: E0424 23:52:49.254091 2518 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:52:49.254975 kubelet[2518]: I0424 23:52:49.254962 2518 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 24 23:52:49.255184 kubelet[2518]: I0424 23:52:49.255082 2518 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:52:49.255483 kubelet[2518]: I0424 23:52:49.255454 2518 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 24 23:52:49.258945 kubelet[2518]: E0424 23:52:49.258753 2518 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:52:49.322200 kubelet[2518]: I0424 23:52:49.322090 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:49.323072 kubelet[2518]: I0424 23:52:49.322110 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:49.334057 kubelet[2518]: I0424 23:52:49.333658 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:52:49.437329 kubelet[2518]: I0424 23:52:49.436673 2518 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:52:49.477014 kubelet[2518]: I0424 23:52:49.476793 2518 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 24 23:52:49.477014 kubelet[2518]: I0424 23:52:49.477082 2518 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 24 23:52:49.488076 kubelet[2518]: I0424 23:52:49.487843 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:49.488791 kubelet[2518]: I0424 23:52:49.488686 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:49.488791 kubelet[2518]: I0424 23:52:49.488711 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:49.488791 kubelet[2518]: I0424 23:52:49.488741 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 24 23:52:49.488791 kubelet[2518]: I0424 23:52:49.488755 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76ece98090b0de32600710bc18e2048a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ece98090b0de32600710bc18e2048a\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:49.489389 kubelet[2518]: I0424 23:52:49.488803 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76ece98090b0de32600710bc18e2048a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ece98090b0de32600710bc18e2048a\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:49.489389 kubelet[2518]: I0424 23:52:49.488857 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:49.489389 kubelet[2518]: I0424 23:52:49.488888 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:49.489389 kubelet[2518]: I0424 23:52:49.488951 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76ece98090b0de32600710bc18e2048a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76ece98090b0de32600710bc18e2048a\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:49.645837 kubelet[2518]: E0424 23:52:49.645186 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:49.645837 kubelet[2518]: E0424 23:52:49.645644 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:49.736590 kubelet[2518]: E0424 23:52:49.734983 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:50.049452 kubelet[2518]: I0424 23:52:50.048686 2518 apiserver.go:52] "Watching apiserver"
Apr 24 23:52:50.069204 kubelet[2518]: I0424 23:52:50.068849 2518 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 24 23:52:50.247605 kubelet[2518]: I0424 23:52:50.241905 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:52:50.247605 kubelet[2518]: I0424 23:52:50.242480 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:50.247605 kubelet[2518]: I0424 23:52:50.242450 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:50.263870 kubelet[2518]: E0424 23:52:50.262235 2518 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:52:50.263870 kubelet[2518]: E0424 23:52:50.262870 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:50.269678 kubelet[2518]: E0424 23:52:50.269314 2518 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:52:50.271394 kubelet[2518]: E0424 23:52:50.271305 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:50.275492 kubelet[2518]: E0424 23:52:50.275326 2518 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:52:50.276557 kubelet[2518]: E0424 23:52:50.276306 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:50.333576 kubelet[2518]: I0424 23:52:50.331903 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3318491940000001 podStartE2EDuration="1.331849194s" podCreationTimestamp="2026-04-24 23:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:52:50.307274096 +0000 UTC m=+1.371397406" watchObservedRunningTime="2026-04-24 23:52:50.331849194 +0000 UTC m=+1.395972523"
Apr 24 23:52:50.333576 kubelet[2518]: I0424 23:52:50.332324 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.332313952 podStartE2EDuration="1.332313952s" podCreationTimestamp="2026-04-24 23:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:52:50.331345474 +0000 UTC m=+1.395468792" watchObservedRunningTime="2026-04-24 23:52:50.332313952 +0000 UTC m=+1.396437284"
Apr 24 23:52:50.362914 kubelet[2518]: I0424 23:52:50.362335 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3622198669999999 podStartE2EDuration="1.362219867s" podCreationTimestamp="2026-04-24 23:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:52:50.361967297 +0000 UTC m=+1.426090608" watchObservedRunningTime="2026-04-24 23:52:50.362219867 +0000 UTC m=+1.426343190"
Apr 24 23:52:51.244257 kubelet[2518]: E0424 23:52:51.244194 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:51.245244 kubelet[2518]: E0424 23:52:51.244831 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:51.245244 kubelet[2518]: E0424 23:52:51.245083 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:52.249564 kubelet[2518]: E0424 23:52:52.249366 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:53.188788 kubelet[2518]: I0424 23:52:53.188645 2518 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 24 23:52:53.190239 containerd[1469]: time="2026-04-24T23:52:53.190175030Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 24 23:52:53.191571 kubelet[2518]: I0424 23:52:53.190483 2518 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 24 23:52:53.257549 kubelet[2518]: E0424 23:52:53.257320 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:53.372117 kubelet[2518]: E0424 23:52:53.371912 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:54.158215 systemd[1]: Created slice kubepods-besteffort-podafe81bda_60b3_4221_ba78_6449c554f36a.slice - libcontainer container kubepods-besteffort-podafe81bda_60b3_4221_ba78_6449c554f36a.slice.
Apr 24 23:52:54.263469 kubelet[2518]: E0424 23:52:54.262671 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:54.366324 kubelet[2518]: I0424 23:52:54.366118 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afe81bda-60b3-4221-ba78-6449c554f36a-kube-proxy\") pod \"kube-proxy-4mw6z\" (UID: \"afe81bda-60b3-4221-ba78-6449c554f36a\") " pod="kube-system/kube-proxy-4mw6z"
Apr 24 23:52:54.366324 kubelet[2518]: I0424 23:52:54.366243 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe81bda-60b3-4221-ba78-6449c554f36a-lib-modules\") pod \"kube-proxy-4mw6z\" (UID: \"afe81bda-60b3-4221-ba78-6449c554f36a\") " pod="kube-system/kube-proxy-4mw6z"
Apr 24 23:52:54.366324 kubelet[2518]: I0424 23:52:54.366279 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe81bda-60b3-4221-ba78-6449c554f36a-xtables-lock\") pod \"kube-proxy-4mw6z\" (UID: \"afe81bda-60b3-4221-ba78-6449c554f36a\") " pod="kube-system/kube-proxy-4mw6z"
Apr 24 23:52:54.366324 kubelet[2518]: I0424 23:52:54.366341 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbc9\" (UniqueName: \"kubernetes.io/projected/afe81bda-60b3-4221-ba78-6449c554f36a-kube-api-access-lsbc9\") pod \"kube-proxy-4mw6z\" (UID: \"afe81bda-60b3-4221-ba78-6449c554f36a\") " pod="kube-system/kube-proxy-4mw6z"
Apr 24 23:52:54.436216 systemd[1]: Created slice kubepods-besteffort-pod56c413be_84cc_471f_8408_d0c29df50c4d.slice - libcontainer container kubepods-besteffort-pod56c413be_84cc_471f_8408_d0c29df50c4d.slice.
Apr 24 23:52:54.568867 kubelet[2518]: I0424 23:52:54.568570 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqc52\" (UniqueName: \"kubernetes.io/projected/56c413be-84cc-471f-8408-d0c29df50c4d-kube-api-access-nqc52\") pod \"tigera-operator-6cf4cccc57-25g2c\" (UID: \"56c413be-84cc-471f-8408-d0c29df50c4d\") " pod="tigera-operator/tigera-operator-6cf4cccc57-25g2c"
Apr 24 23:52:54.568867 kubelet[2518]: I0424 23:52:54.568787 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56c413be-84cc-471f-8408-d0c29df50c4d-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-25g2c\" (UID: \"56c413be-84cc-471f-8408-d0c29df50c4d\") " pod="tigera-operator/tigera-operator-6cf4cccc57-25g2c"
Apr 24 23:52:54.792137 kubelet[2518]: E0424 23:52:54.791803 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:54.792464 containerd[1469]: time="2026-04-24T23:52:54.791857351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-25g2c,Uid:56c413be-84cc-471f-8408-d0c29df50c4d,Namespace:tigera-operator,Attempt:0,}"
Apr 24 23:52:54.794410 containerd[1469]: time="2026-04-24T23:52:54.794355985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mw6z,Uid:afe81bda-60b3-4221-ba78-6449c554f36a,Namespace:kube-system,Attempt:0,}"
Apr 24 23:52:54.826234 containerd[1469]: time="2026-04-24T23:52:54.825024831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:52:54.826234 containerd[1469]: time="2026-04-24T23:52:54.825076999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:52:54.826234 containerd[1469]: time="2026-04-24T23:52:54.825089037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:52:54.826234 containerd[1469]: time="2026-04-24T23:52:54.825163433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:52:54.842367 containerd[1469]: time="2026-04-24T23:52:54.841391976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:52:54.842367 containerd[1469]: time="2026-04-24T23:52:54.841434359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:52:54.842367 containerd[1469]: time="2026-04-24T23:52:54.841524162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:52:54.842367 containerd[1469]: time="2026-04-24T23:52:54.842190942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:52:54.860574 systemd[1]: Started cri-containerd-c1f182167a666e50348615d25d9c254bb8e4247dd1a1a21e05077acc05e177ab.scope - libcontainer container c1f182167a666e50348615d25d9c254bb8e4247dd1a1a21e05077acc05e177ab.
Apr 24 23:52:54.904337 systemd[1]: Started cri-containerd-cc24fc72877b61e14f71ea324927e385aeca9ff99002c368e17911f34644218f.scope - libcontainer container cc24fc72877b61e14f71ea324927e385aeca9ff99002c368e17911f34644218f.
Apr 24 23:52:55.089831 containerd[1469]: time="2026-04-24T23:52:55.088468546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-25g2c,Uid:56c413be-84cc-471f-8408-d0c29df50c4d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c1f182167a666e50348615d25d9c254bb8e4247dd1a1a21e05077acc05e177ab\""
Apr 24 23:52:55.095063 containerd[1469]: time="2026-04-24T23:52:55.094762219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mw6z,Uid:afe81bda-60b3-4221-ba78-6449c554f36a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc24fc72877b61e14f71ea324927e385aeca9ff99002c368e17911f34644218f\""
Apr 24 23:52:55.095858 containerd[1469]: time="2026-04-24T23:52:55.095816708Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 24 23:52:55.096451 kubelet[2518]: E0424 23:52:55.096398 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:55.104193 containerd[1469]: time="2026-04-24T23:52:55.104089380Z" level=info msg="CreateContainer within sandbox \"cc24fc72877b61e14f71ea324927e385aeca9ff99002c368e17911f34644218f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 24 23:52:55.127417 containerd[1469]: time="2026-04-24T23:52:55.127263845Z" level=info msg="CreateContainer within sandbox \"cc24fc72877b61e14f71ea324927e385aeca9ff99002c368e17911f34644218f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14849adc86b507923fea9d8c7317e0ca7f6e69d793280adb906248bf4a49f454\""
Apr 24 23:52:55.129095 containerd[1469]: time="2026-04-24T23:52:55.129053778Z" level=info msg="StartContainer for \"14849adc86b507923fea9d8c7317e0ca7f6e69d793280adb906248bf4a49f454\""
Apr 24 23:52:55.175148 systemd[1]: Started cri-containerd-14849adc86b507923fea9d8c7317e0ca7f6e69d793280adb906248bf4a49f454.scope - libcontainer container 14849adc86b507923fea9d8c7317e0ca7f6e69d793280adb906248bf4a49f454.
Apr 24 23:52:55.209356 containerd[1469]: time="2026-04-24T23:52:55.209283823Z" level=info msg="StartContainer for \"14849adc86b507923fea9d8c7317e0ca7f6e69d793280adb906248bf4a49f454\" returns successfully"
Apr 24 23:52:55.272131 kubelet[2518]: E0424 23:52:55.271906 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:55.289915 kubelet[2518]: I0424 23:52:55.289737 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-4mw6z" podStartSLOduration=1.289669347 podStartE2EDuration="1.289669347s" podCreationTimestamp="2026-04-24 23:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:52:55.289300458 +0000 UTC m=+6.353423767" watchObservedRunningTime="2026-04-24 23:52:55.289669347 +0000 UTC m=+6.353792656"
Apr 24 23:52:56.279446 kubelet[2518]: E0424 23:52:56.279354 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:52:56.409665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115098349.mount: Deactivated successfully.
Apr 24 23:52:57.519579 containerd[1469]: time="2026-04-24T23:52:57.519307466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:52:57.521718 containerd[1469]: time="2026-04-24T23:52:57.519773372Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 24 23:52:57.521831 containerd[1469]: time="2026-04-24T23:52:57.521782493Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:52:57.523421 containerd[1469]: time="2026-04-24T23:52:57.523349467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:52:57.523985 containerd[1469]: time="2026-04-24T23:52:57.523952195Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.428105346s"
Apr 24 23:52:57.524021 containerd[1469]: time="2026-04-24T23:52:57.523989577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 24 23:52:57.530771 containerd[1469]: time="2026-04-24T23:52:57.530741806Z" level=info msg="CreateContainer within sandbox \"c1f182167a666e50348615d25d9c254bb8e4247dd1a1a21e05077acc05e177ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 24 23:52:57.553856 containerd[1469]: time="2026-04-24T23:52:57.553724798Z" level=info msg="CreateContainer within sandbox \"c1f182167a666e50348615d25d9c254bb8e4247dd1a1a21e05077acc05e177ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e2798d656918ab8e395662ed89abd72179c6242b2dd202c940be8735debdb74\""
Apr 24 23:52:57.556653 containerd[1469]: time="2026-04-24T23:52:57.555462237Z" level=info msg="StartContainer for \"1e2798d656918ab8e395662ed89abd72179c6242b2dd202c940be8735debdb74\""
Apr 24 23:52:57.614204 systemd[1]: Started cri-containerd-1e2798d656918ab8e395662ed89abd72179c6242b2dd202c940be8735debdb74.scope - libcontainer container 1e2798d656918ab8e395662ed89abd72179c6242b2dd202c940be8735debdb74.
Apr 24 23:52:57.659654 containerd[1469]: time="2026-04-24T23:52:57.659409315Z" level=info msg="StartContainer for \"1e2798d656918ab8e395662ed89abd72179c6242b2dd202c940be8735debdb74\" returns successfully"
Apr 24 23:52:58.294485 kubelet[2518]: I0424 23:52:58.294348 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-25g2c" podStartSLOduration=1.863544583 podStartE2EDuration="4.294333676s" podCreationTimestamp="2026-04-24 23:52:54 +0000 UTC" firstStartedPulling="2026-04-24 23:52:55.095453536 +0000 UTC m=+6.159576842" lastFinishedPulling="2026-04-24 23:52:57.526242629 +0000 UTC m=+8.590365935" observedRunningTime="2026-04-24 23:52:58.294146843 +0000 UTC m=+9.358270168" watchObservedRunningTime="2026-04-24 23:52:58.294333676 +0000 UTC m=+9.358457000"
Apr 24 23:53:00.070231 kubelet[2518]: E0424 23:53:00.070108 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:53:03.002235 kubelet[2518]: E0424 23:53:03.001685 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:53:03.387041 kubelet[2518]: E0424 23:53:03.385854 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:53:03.676188 sudo[1649]: pam_unix(sudo:session): session closed for user root
Apr 24 23:53:03.681969 sshd[1646]: pam_unix(sshd:session): session closed for user core
Apr 24 23:53:03.692208 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:44420.service: Deactivated successfully.
Apr 24 23:53:03.713006 systemd[1]: session-7.scope: Deactivated successfully.
Apr 24 23:53:03.714022 systemd[1]: session-7.scope: Consumed 5.969s CPU time, 164.2M memory peak, 0B memory swap peak.
Apr 24 23:53:03.715454 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
Apr 24 23:53:03.749741 systemd-logind[1458]: Removed session 7.
Apr 24 23:53:05.208492 update_engine[1461]: I20260424 23:53:05.206591 1461 update_attempter.cc:509] Updating boot flags...
Apr 24 23:53:05.288088 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2939)
Apr 24 23:53:05.385961 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2941)
Apr 24 23:53:05.982586 systemd[1]: Created slice kubepods-besteffort-poda0cb4993_7b75_4751_ab65_cb654e86c9f8.slice - libcontainer container kubepods-besteffort-poda0cb4993_7b75_4751_ab65_cb654e86c9f8.slice.
Apr 24 23:53:06.012477 kubelet[2518]: I0424 23:53:06.012327 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a0cb4993-7b75-4751-ab65-cb654e86c9f8-typha-certs\") pod \"calico-typha-75cd6cb456-slln4\" (UID: \"a0cb4993-7b75-4751-ab65-cb654e86c9f8\") " pod="calico-system/calico-typha-75cd6cb456-slln4"
Apr 24 23:53:06.012477 kubelet[2518]: I0424 23:53:06.012429 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4w8\" (UniqueName: \"kubernetes.io/projected/a0cb4993-7b75-4751-ab65-cb654e86c9f8-kube-api-access-xs4w8\") pod \"calico-typha-75cd6cb456-slln4\" (UID: \"a0cb4993-7b75-4751-ab65-cb654e86c9f8\") " pod="calico-system/calico-typha-75cd6cb456-slln4"
Apr 24 23:53:06.012477 kubelet[2518]: I0424 23:53:06.012453 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0cb4993-7b75-4751-ab65-cb654e86c9f8-tigera-ca-bundle\") pod \"calico-typha-75cd6cb456-slln4\" (UID: \"a0cb4993-7b75-4751-ab65-cb654e86c9f8\") " pod="calico-system/calico-typha-75cd6cb456-slln4"
Apr 24 23:53:06.031389 systemd[1]: Created slice kubepods-besteffort-poddd5c02b4_e2ed_4947_9d95_eff70746fb37.slice - libcontainer container kubepods-besteffort-poddd5c02b4_e2ed_4947_9d95_eff70746fb37.slice.
Apr 24 23:53:06.155498 kubelet[2518]: E0424 23:53:06.155185 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711"
Apr 24 23:53:06.215227 kubelet[2518]: I0424 23:53:06.215074 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd5c02b4-e2ed-4947-9d95-eff70746fb37-tigera-ca-bundle\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.215227 kubelet[2518]: I0424 23:53:06.215178 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-bpffs\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.215227 kubelet[2518]: I0424 23:53:06.215196 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-cni-bin-dir\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.215227 kubelet[2518]: I0424 23:53:06.215207 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-var-lib-calico\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.215227 kubelet[2518]: I0424 23:53:06.215224 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkjs\" (UniqueName: \"kubernetes.io/projected/dd5c02b4-e2ed-4947-9d95-eff70746fb37-kube-api-access-2bkjs\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216121 kubelet[2518]: I0424 23:53:06.215246 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-var-run-calico\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216121 kubelet[2518]: I0424 23:53:06.215327 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-cni-net-dir\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216121 kubelet[2518]: I0424 23:53:06.215344 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-xtables-lock\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216121 kubelet[2518]: I0424 23:53:06.215358 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-nodeproc\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216121 kubelet[2518]: I0424 23:53:06.215369 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-policysync\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216265 kubelet[2518]: I0424 23:53:06.215382 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-sys-fs\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216265 kubelet[2518]: I0424 23:53:06.215401 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-cni-log-dir\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216265 kubelet[2518]: I0424 23:53:06.215412 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-flexvol-driver-host\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216265 kubelet[2518]: I0424 23:53:06.215423 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd5c02b4-e2ed-4947-9d95-eff70746fb37-lib-modules\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.216265 kubelet[2518]: I0424 23:53:06.215438 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd5c02b4-e2ed-4947-9d95-eff70746fb37-node-certs\") pod \"calico-node-rjlrv\" (UID: \"dd5c02b4-e2ed-4947-9d95-eff70746fb37\") " pod="calico-system/calico-node-rjlrv"
Apr 24 23:53:06.295827 kubelet[2518]: E0424 23:53:06.294458 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:53:06.296192 containerd[1469]: time="2026-04-24T23:53:06.296092446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cd6cb456-slln4,Uid:a0cb4993-7b75-4751-ab65-cb654e86c9f8,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:06.326607 kubelet[2518]: I0424 23:53:06.319896 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98a32295-87c4-4c33-bd3a-7a5df06b2711-kubelet-dir\") pod \"csi-node-driver-x29xn\" (UID: \"98a32295-87c4-4c33-bd3a-7a5df06b2711\") " pod="calico-system/csi-node-driver-x29xn"
Apr 24 23:53:06.329355 kubelet[2518]: E0424 23:53:06.328966 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.329355 kubelet[2518]: W0424 23:53:06.329048 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.329355 kubelet[2518]: E0424 23:53:06.329236 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.330195 kubelet[2518]: E0424 23:53:06.330025 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.330195 kubelet[2518]: W0424 23:53:06.330049 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.330195 kubelet[2518]: E0424 23:53:06.330077 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.333487 kubelet[2518]: E0424 23:53:06.332553 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.333487 kubelet[2518]: W0424 23:53:06.332879 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.333487 kubelet[2518]: E0424 23:53:06.333316 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.337507 kubelet[2518]: E0424 23:53:06.336952 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.337507 kubelet[2518]: W0424 23:53:06.337346 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.337507 kubelet[2518]: E0424 23:53:06.337403 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.339280 kubelet[2518]: E0424 23:53:06.338710 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.339280 kubelet[2518]: W0424 23:53:06.338725 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.339280 kubelet[2518]: E0424 23:53:06.338762 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.339280 kubelet[2518]: I0424 23:53:06.339237 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98a32295-87c4-4c33-bd3a-7a5df06b2711-registration-dir\") pod \"csi-node-driver-x29xn\" (UID: \"98a32295-87c4-4c33-bd3a-7a5df06b2711\") " pod="calico-system/csi-node-driver-x29xn"
Apr 24 23:53:06.345080 kubelet[2518]: E0424 23:53:06.345019 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.345231 kubelet[2518]: W0424 23:53:06.345185 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.345338 kubelet[2518]: E0424 23:53:06.345326 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.345834 kubelet[2518]: E0424 23:53:06.345821 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.345983 kubelet[2518]: W0424 23:53:06.345971 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.346358 kubelet[2518]: E0424 23:53:06.346343 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.348481 kubelet[2518]: E0424 23:53:06.348468 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.348789 kubelet[2518]: W0424 23:53:06.348779 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.348921 kubelet[2518]: E0424 23:53:06.348836 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.349123 kubelet[2518]: E0424 23:53:06.349113 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.349177 kubelet[2518]: W0424 23:53:06.349169 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.349208 kubelet[2518]: E0424 23:53:06.349202 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.350817 kubelet[2518]: E0424 23:53:06.350803 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.350902 kubelet[2518]: W0424 23:53:06.350892 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.351010 kubelet[2518]: E0424 23:53:06.351001 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.354263 kubelet[2518]: E0424 23:53:06.353860 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.354263 kubelet[2518]: W0424 23:53:06.353873 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.354263 kubelet[2518]: E0424 23:53:06.353884 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.356243 kubelet[2518]: E0424 23:53:06.356226 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.356361 kubelet[2518]: W0424 23:53:06.356350 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.356528 kubelet[2518]: E0424 23:53:06.356470 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.356609 kubelet[2518]: I0424 23:53:06.356599 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/98a32295-87c4-4c33-bd3a-7a5df06b2711-varrun\") pod \"csi-node-driver-x29xn\" (UID: \"98a32295-87c4-4c33-bd3a-7a5df06b2711\") " pod="calico-system/csi-node-driver-x29xn"
Apr 24 23:53:06.358356 kubelet[2518]: E0424 23:53:06.358342 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.358430 kubelet[2518]: W0424 23:53:06.358419 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.358483 kubelet[2518]: E0424 23:53:06.358474 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.393057 kubelet[2518]: E0424 23:53:06.392853 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.395166 kubelet[2518]: W0424 23:53:06.394558 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.396357 kubelet[2518]: E0424 23:53:06.396222 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.398180 kubelet[2518]: E0424 23:53:06.398137 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.398253 kubelet[2518]: W0424 23:53:06.398239 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.398307 kubelet[2518]: E0424 23:53:06.398300 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.398796 kubelet[2518]: E0424 23:53:06.398785 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.398854 kubelet[2518]: W0424 23:53:06.398847 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.398912 kubelet[2518]: E0424 23:53:06.398905 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.399197 kubelet[2518]: E0424 23:53:06.399188 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.399249 kubelet[2518]: W0424 23:53:06.399242 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.399282 kubelet[2518]: E0424 23:53:06.399276 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.399605 kubelet[2518]: E0424 23:53:06.399596 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.399650 kubelet[2518]: W0424 23:53:06.399644 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.399700 kubelet[2518]: E0424 23:53:06.399694 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:53:06.400137 kubelet[2518]: E0424 23:53:06.400096 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:53:06.400194 kubelet[2518]: W0424 23:53:06.400186 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:53:06.400239 kubelet[2518]: E0424 23:53:06.400233 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:53:06.400527 kubelet[2518]: E0424 23:53:06.400519 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.400579 kubelet[2518]: W0424 23:53:06.400571 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.400611 kubelet[2518]: E0424 23:53:06.400605 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.403235 kubelet[2518]: E0424 23:53:06.403122 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.404046 kubelet[2518]: W0424 23:53:06.404001 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.404293 kubelet[2518]: E0424 23:53:06.404234 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.405079 kubelet[2518]: E0424 23:53:06.405067 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.405239 kubelet[2518]: W0424 23:53:06.405164 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.405293 kubelet[2518]: E0424 23:53:06.405286 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.405761 kubelet[2518]: E0424 23:53:06.405751 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.405836 kubelet[2518]: W0424 23:53:06.405829 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.405967 kubelet[2518]: E0424 23:53:06.405957 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.406564 kubelet[2518]: E0424 23:53:06.406555 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.406716 kubelet[2518]: W0424 23:53:06.406706 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.406791 kubelet[2518]: E0424 23:53:06.406753 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.407101 kubelet[2518]: E0424 23:53:06.407091 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.407220 kubelet[2518]: W0424 23:53:06.407175 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.407307 kubelet[2518]: E0424 23:53:06.407193 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.407616 kubelet[2518]: E0424 23:53:06.407543 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.407616 kubelet[2518]: W0424 23:53:06.407552 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.407616 kubelet[2518]: E0424 23:53:06.407559 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.407896 kubelet[2518]: E0424 23:53:06.407888 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.407996 kubelet[2518]: W0424 23:53:06.407956 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.407996 kubelet[2518]: E0424 23:53:06.407968 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.408374 kubelet[2518]: E0424 23:53:06.408365 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.412025 kubelet[2518]: W0424 23:53:06.411886 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.412801 kubelet[2518]: E0424 23:53:06.412095 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.413503 kubelet[2518]: E0424 23:53:06.413459 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.413503 kubelet[2518]: W0424 23:53:06.413473 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.413503 kubelet[2518]: E0424 23:53:06.413483 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.414643 kubelet[2518]: E0424 23:53:06.414633 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.414735 kubelet[2518]: W0424 23:53:06.414726 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.414786 kubelet[2518]: E0424 23:53:06.414779 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.418316 kubelet[2518]: E0424 23:53:06.417871 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.419918 kubelet[2518]: W0424 23:53:06.419177 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.419918 kubelet[2518]: E0424 23:53:06.419412 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.420187 kubelet[2518]: E0424 23:53:06.420146 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.420187 kubelet[2518]: W0424 23:53:06.420156 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.420262 kubelet[2518]: E0424 23:53:06.420241 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.420918 kubelet[2518]: E0424 23:53:06.420905 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.421036 kubelet[2518]: W0424 23:53:06.421026 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.421176 kubelet[2518]: E0424 23:53:06.421086 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.421326 kubelet[2518]: I0424 23:53:06.421312 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m9q4\" (UniqueName: \"kubernetes.io/projected/98a32295-87c4-4c33-bd3a-7a5df06b2711-kube-api-access-7m9q4\") pod \"csi-node-driver-x29xn\" (UID: \"98a32295-87c4-4c33-bd3a-7a5df06b2711\") " pod="calico-system/csi-node-driver-x29xn" Apr 24 23:53:06.421444 kubelet[2518]: E0424 23:53:06.421438 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.421492 kubelet[2518]: W0424 23:53:06.421476 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.421492 kubelet[2518]: E0424 23:53:06.421486 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.421961 kubelet[2518]: E0424 23:53:06.421869 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.421961 kubelet[2518]: W0424 23:53:06.421883 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.421961 kubelet[2518]: E0424 23:53:06.421895 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.422560 kubelet[2518]: E0424 23:53:06.422422 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.422560 kubelet[2518]: W0424 23:53:06.422432 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.422560 kubelet[2518]: E0424 23:53:06.422447 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.422921 kubelet[2518]: E0424 23:53:06.422877 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.422921 kubelet[2518]: W0424 23:53:06.422887 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.422921 kubelet[2518]: E0424 23:53:06.422895 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.423303 kubelet[2518]: E0424 23:53:06.423197 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.423303 kubelet[2518]: W0424 23:53:06.423205 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.423303 kubelet[2518]: E0424 23:53:06.423218 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.423657 kubelet[2518]: E0424 23:53:06.423586 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.423657 kubelet[2518]: W0424 23:53:06.423597 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.423657 kubelet[2518]: E0424 23:53:06.423609 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.424305 kubelet[2518]: E0424 23:53:06.424188 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.424305 kubelet[2518]: W0424 23:53:06.424197 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.424305 kubelet[2518]: E0424 23:53:06.424206 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.424522 kubelet[2518]: E0424 23:53:06.424496 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.424522 kubelet[2518]: W0424 23:53:06.424503 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.424522 kubelet[2518]: E0424 23:53:06.424510 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.424963 kubelet[2518]: E0424 23:53:06.424872 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.424963 kubelet[2518]: W0424 23:53:06.424881 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.424963 kubelet[2518]: E0424 23:53:06.424889 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.425280 kubelet[2518]: E0424 23:53:06.425205 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.425280 kubelet[2518]: W0424 23:53:06.425213 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.425280 kubelet[2518]: E0424 23:53:06.425220 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.425578 kubelet[2518]: E0424 23:53:06.425459 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.425578 kubelet[2518]: W0424 23:53:06.425466 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.425578 kubelet[2518]: E0424 23:53:06.425480 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.427158 kubelet[2518]: E0424 23:53:06.426293 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.427158 kubelet[2518]: W0424 23:53:06.426303 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.427158 kubelet[2518]: E0424 23:53:06.426331 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.427158 kubelet[2518]: I0424 23:53:06.426896 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98a32295-87c4-4c33-bd3a-7a5df06b2711-socket-dir\") pod \"csi-node-driver-x29xn\" (UID: \"98a32295-87c4-4c33-bd3a-7a5df06b2711\") " pod="calico-system/csi-node-driver-x29xn" Apr 24 23:53:06.427296 kubelet[2518]: E0424 23:53:06.427289 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.427356 kubelet[2518]: W0424 23:53:06.427348 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.427431 kubelet[2518]: E0424 23:53:06.427422 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.427724 kubelet[2518]: E0424 23:53:06.427714 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.427791 kubelet[2518]: W0424 23:53:06.427783 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.427854 kubelet[2518]: E0424 23:53:06.427847 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.429032 containerd[1469]: time="2026-04-24T23:53:06.428704453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:06.429032 containerd[1469]: time="2026-04-24T23:53:06.428756264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:06.429032 containerd[1469]: time="2026-04-24T23:53:06.428768009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:06.430215 kubelet[2518]: E0424 23:53:06.429802 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.430215 kubelet[2518]: W0424 23:53:06.429819 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.430215 kubelet[2518]: E0424 23:53:06.429875 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.430582 kubelet[2518]: E0424 23:53:06.430539 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.430582 kubelet[2518]: W0424 23:53:06.430561 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.430582 kubelet[2518]: E0424 23:53:06.430574 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.430727 containerd[1469]: time="2026-04-24T23:53:06.428912876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:06.431380 kubelet[2518]: E0424 23:53:06.431321 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.431380 kubelet[2518]: W0424 23:53:06.431335 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.431380 kubelet[2518]: E0424 23:53:06.431345 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.435138 kubelet[2518]: E0424 23:53:06.434412 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.435138 kubelet[2518]: W0424 23:53:06.434731 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.435138 kubelet[2518]: E0424 23:53:06.435003 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.440912 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.443039 kubelet[2518]: W0424 23:53:06.440989 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.441054 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.441430 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.443039 kubelet[2518]: W0424 23:53:06.441438 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.441448 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.441917 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.443039 kubelet[2518]: W0424 23:53:06.441943 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.441953 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.443039 kubelet[2518]: E0424 23:53:06.442187 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446654 kubelet[2518]: W0424 23:53:06.442193 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446654 kubelet[2518]: E0424 23:53:06.442201 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.446654 kubelet[2518]: E0424 23:53:06.442356 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446654 kubelet[2518]: W0424 23:53:06.442361 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446654 kubelet[2518]: E0424 23:53:06.442369 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.446654 kubelet[2518]: E0424 23:53:06.442694 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446654 kubelet[2518]: W0424 23:53:06.442700 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446654 kubelet[2518]: E0424 23:53:06.442706 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.446654 kubelet[2518]: E0424 23:53:06.442832 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446654 kubelet[2518]: W0424 23:53:06.442837 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.442842 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.443004 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446852 kubelet[2518]: W0424 23:53:06.443011 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.443018 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.443277 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446852 kubelet[2518]: W0424 23:53:06.443287 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.443294 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.443433 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.446852 kubelet[2518]: W0424 23:53:06.443438 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.446852 kubelet[2518]: E0424 23:53:06.443480 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.443601 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447655 kubelet[2518]: W0424 23:53:06.443606 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.443611 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.444043 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447655 kubelet[2518]: W0424 23:53:06.444051 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.444058 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.444231 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447655 kubelet[2518]: W0424 23:53:06.444236 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.444242 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.447655 kubelet[2518]: E0424 23:53:06.444364 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447816 kubelet[2518]: W0424 23:53:06.444370 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447816 kubelet[2518]: E0424 23:53:06.444376 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.447816 kubelet[2518]: E0424 23:53:06.444575 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447816 kubelet[2518]: W0424 23:53:06.444581 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447816 kubelet[2518]: E0424 23:53:06.444586 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.447816 kubelet[2518]: E0424 23:53:06.444724 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447816 kubelet[2518]: W0424 23:53:06.444732 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447816 kubelet[2518]: E0424 23:53:06.444737 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.447816 kubelet[2518]: E0424 23:53:06.444974 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447816 kubelet[2518]: W0424 23:53:06.444980 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.444986 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.445104 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447986 kubelet[2518]: W0424 23:53:06.445109 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.445115 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.445265 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447986 kubelet[2518]: W0424 23:53:06.445272 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.445286 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.445415 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.447986 kubelet[2518]: W0424 23:53:06.445421 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.447986 kubelet[2518]: E0424 23:53:06.445427 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.448135 kubelet[2518]: E0424 23:53:06.445529 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.448135 kubelet[2518]: W0424 23:53:06.445534 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.448135 kubelet[2518]: E0424 23:53:06.445541 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.448135 kubelet[2518]: E0424 23:53:06.445641 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.448135 kubelet[2518]: W0424 23:53:06.445645 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.448135 kubelet[2518]: E0424 23:53:06.445650 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.448135 kubelet[2518]: E0424 23:53:06.445784 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.448135 kubelet[2518]: W0424 23:53:06.445789 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.448135 kubelet[2518]: E0424 23:53:06.445795 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.449728 kubelet[2518]: E0424 23:53:06.449706 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.449728 kubelet[2518]: W0424 23:53:06.449725 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.449789 kubelet[2518]: E0424 23:53:06.449735 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.468131 systemd[1]: Started cri-containerd-457a7840683753d4836728ae42fc9d58f4e5fa729f8eb2ff24d74dddbb2f1375.scope - libcontainer container 457a7840683753d4836728ae42fc9d58f4e5fa729f8eb2ff24d74dddbb2f1375. 
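The repeated kubelet errors above come from its FlexVolume dynamic-plugin prober: kubelet finds the plugin directory `nodeagent~uds` under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/`, tries to execute its `uds` driver with the `init` command, and since the executable is not found it gets empty output, which then fails JSON parsing with "unexpected end of JSON input". A minimal sketch of the JSON reply a FlexVolume driver is expected to print for `init` (a hypothetical stub for illustration, not the actual `uds` driver):

```shell
# Hypothetical FlexVolume driver stub. kubelet invokes the driver binary
# with a command name ("init", "mount", ...) and parses its stdout as JSON;
# a missing binary produces empty output, which is exactly the
# "unexpected end of JSON input" error repeated in the log above.
driver_call() {
  case "$1" in
    init)
      # Report success and opt out of attach/detach support.
      echo '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      echo '{"status":"Not supported"}'
      return 1
      ;;
  esac
}

driver_call init
```

In a case like this log, the noise stops once the plugin directory either contains a driver that emits valid JSON or is removed so the prober no longer discovers it.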
Apr 24 23:53:06.540951 containerd[1469]: time="2026-04-24T23:53:06.540826137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cd6cb456-slln4,Uid:a0cb4993-7b75-4751-ab65-cb654e86c9f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"457a7840683753d4836728ae42fc9d58f4e5fa729f8eb2ff24d74dddbb2f1375\"" Apr 24 23:53:06.543152 kubelet[2518]: E0424 23:53:06.543026 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:06.543591 kubelet[2518]: E0424 23:53:06.543541 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.543591 kubelet[2518]: W0424 23:53:06.543565 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.543710 kubelet[2518]: E0424 23:53:06.543594 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.543857 kubelet[2518]: E0424 23:53:06.543823 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.543857 kubelet[2518]: W0424 23:53:06.543836 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.543857 kubelet[2518]: E0424 23:53:06.543844 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.544126 kubelet[2518]: E0424 23:53:06.544117 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.544126 kubelet[2518]: W0424 23:53:06.544123 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.544207 kubelet[2518]: E0424 23:53:06.544130 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.544962 containerd[1469]: time="2026-04-24T23:53:06.544894188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:53:06.545475 kubelet[2518]: E0424 23:53:06.545459 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.545475 kubelet[2518]: W0424 23:53:06.545474 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.545551 kubelet[2518]: E0424 23:53:06.545485 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.545979 kubelet[2518]: E0424 23:53:06.545866 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.545979 kubelet[2518]: W0424 23:53:06.545915 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.545979 kubelet[2518]: E0424 23:53:06.545954 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.546986 kubelet[2518]: E0424 23:53:06.546355 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.546986 kubelet[2518]: W0424 23:53:06.546370 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.546986 kubelet[2518]: E0424 23:53:06.546407 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.546986 kubelet[2518]: E0424 23:53:06.546625 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.546986 kubelet[2518]: W0424 23:53:06.546632 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.546986 kubelet[2518]: E0424 23:53:06.546640 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.546986 kubelet[2518]: E0424 23:53:06.546949 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.546986 kubelet[2518]: W0424 23:53:06.546955 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.546986 kubelet[2518]: E0424 23:53:06.546962 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.547383 kubelet[2518]: E0424 23:53:06.547216 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.547383 kubelet[2518]: W0424 23:53:06.547221 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.547383 kubelet[2518]: E0424 23:53:06.547228 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.547539 kubelet[2518]: E0424 23:53:06.547526 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.547564 kubelet[2518]: W0424 23:53:06.547540 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.547564 kubelet[2518]: E0424 23:53:06.547551 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.548045 kubelet[2518]: E0424 23:53:06.548026 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.548045 kubelet[2518]: W0424 23:53:06.548041 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.548227 kubelet[2518]: E0424 23:53:06.548053 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.550906 kubelet[2518]: E0424 23:53:06.548387 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.550906 kubelet[2518]: W0424 23:53:06.548401 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.550906 kubelet[2518]: E0424 23:53:06.548415 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.550906 kubelet[2518]: E0424 23:53:06.548637 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.550906 kubelet[2518]: W0424 23:53:06.548647 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.550906 kubelet[2518]: E0424 23:53:06.548679 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.559515 kubelet[2518]: E0424 23:53:06.551584 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.559515 kubelet[2518]: W0424 23:53:06.551777 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.559515 kubelet[2518]: E0424 23:53:06.552050 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.559515 kubelet[2518]: E0424 23:53:06.552790 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.559515 kubelet[2518]: W0424 23:53:06.552801 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.559515 kubelet[2518]: E0424 23:53:06.552811 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.559515 kubelet[2518]: E0424 23:53:06.554533 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.559515 kubelet[2518]: W0424 23:53:06.554543 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.559515 kubelet[2518]: E0424 23:53:06.554554 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.567228 kubelet[2518]: E0424 23:53:06.567093 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.567228 kubelet[2518]: W0424 23:53:06.567132 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.567228 kubelet[2518]: E0424 23:53:06.567149 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.567825 kubelet[2518]: E0424 23:53:06.567786 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.567825 kubelet[2518]: W0424 23:53:06.567818 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.568071 kubelet[2518]: E0424 23:53:06.567838 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.571413 kubelet[2518]: E0424 23:53:06.569453 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.571413 kubelet[2518]: W0424 23:53:06.569504 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.571413 kubelet[2518]: E0424 23:53:06.569514 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.572406 kubelet[2518]: E0424 23:53:06.572301 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.572406 kubelet[2518]: W0424 23:53:06.572383 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.572406 kubelet[2518]: E0424 23:53:06.572394 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.598477 kubelet[2518]: E0424 23:53:06.596679 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.598477 kubelet[2518]: W0424 23:53:06.596791 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.598477 kubelet[2518]: E0424 23:53:06.597166 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.598477 kubelet[2518]: E0424 23:53:06.597914 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.598477 kubelet[2518]: W0424 23:53:06.597946 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.598477 kubelet[2518]: E0424 23:53:06.597961 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.598477 kubelet[2518]: E0424 23:53:06.598197 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.598477 kubelet[2518]: W0424 23:53:06.598204 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.598477 kubelet[2518]: E0424 23:53:06.598211 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.599680 kubelet[2518]: E0424 23:53:06.598849 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.599680 kubelet[2518]: W0424 23:53:06.598857 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.599680 kubelet[2518]: E0424 23:53:06.598865 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:06.601842 kubelet[2518]: E0424 23:53:06.600006 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.601842 kubelet[2518]: W0424 23:53:06.600024 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.601842 kubelet[2518]: E0424 23:53:06.600033 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.609724 kubelet[2518]: E0424 23:53:06.609700 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:06.609724 kubelet[2518]: W0424 23:53:06.609718 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:06.609826 kubelet[2518]: E0424 23:53:06.609730 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:06.636827 containerd[1469]: time="2026-04-24T23:53:06.636783523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rjlrv,Uid:dd5c02b4-e2ed-4947-9d95-eff70746fb37,Namespace:calico-system,Attempt:0,}" Apr 24 23:53:06.674148 containerd[1469]: time="2026-04-24T23:53:06.673869455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:06.676848 containerd[1469]: time="2026-04-24T23:53:06.675641972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:06.676848 containerd[1469]: time="2026-04-24T23:53:06.675909073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:06.676848 containerd[1469]: time="2026-04-24T23:53:06.676467972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:06.706455 systemd[1]: Started cri-containerd-b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33.scope - libcontainer container b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33. Apr 24 23:53:06.824612 containerd[1469]: time="2026-04-24T23:53:06.824334234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rjlrv,Uid:dd5c02b4-e2ed-4947-9d95-eff70746fb37,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\"" Apr 24 23:53:08.110101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281023593.mount: Deactivated successfully. 
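The "Nameserver limits exceeded" warnings earlier in the log reflect kubelet truncating the node's resolver list to its per-pod limit; the applied line keeps three servers (1.1.1.1, 1.0.0.1, 8.8.8.8). A sketch of that truncation, assuming a limit of 3 and a hypothetical fourth nameserver as the one being dropped (the log does not show which servers were omitted):

```shell
# Assumed per-pod nameserver limit of 3, consistent with the three
# servers kept in the log's "applied nameserver line".
limit=3
# Hypothetical node resolver list; the fourth entry is invented for
# illustration only.
nameservers="1.1.1.1 1.0.0.1 8.8.8.8 192.0.2.53"
# Keep only the first $limit entries, mirroring the truncation behavior.
applied=$(printf '%s\n' $nameservers | head -n "$limit" | tr '\n' ' ')
applied=${applied% }   # trim the trailing space left by tr
echo "$applied"
```

With these inputs the kept list matches the applied nameserver line reported by kubelet above.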
Apr 24 23:53:08.217775 kubelet[2518]: E0424 23:53:08.217594 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:09.037294 containerd[1469]: time="2026-04-24T23:53:09.036874461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:09.039774 containerd[1469]: time="2026-04-24T23:53:09.037998502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 24 23:53:09.040200 containerd[1469]: time="2026-04-24T23:53:09.040171414Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:09.042777 containerd[1469]: time="2026-04-24T23:53:09.042705980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:09.043548 containerd[1469]: time="2026-04-24T23:53:09.043517751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.498564448s" Apr 24 23:53:09.043618 containerd[1469]: time="2026-04-24T23:53:09.043579477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 24 23:53:09.046327 containerd[1469]: time="2026-04-24T23:53:09.046290338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:53:09.067397 containerd[1469]: time="2026-04-24T23:53:09.067221017Z" level=info msg="CreateContainer within sandbox \"457a7840683753d4836728ae42fc9d58f4e5fa729f8eb2ff24d74dddbb2f1375\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:53:09.086836 containerd[1469]: time="2026-04-24T23:53:09.086689928Z" level=info msg="CreateContainer within sandbox \"457a7840683753d4836728ae42fc9d58f4e5fa729f8eb2ff24d74dddbb2f1375\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"65937d74a442a2894ee48f905d02b715d97622858666946ae49ead887c419aa8\"" Apr 24 23:53:09.088912 containerd[1469]: time="2026-04-24T23:53:09.088888161Z" level=info msg="StartContainer for \"65937d74a442a2894ee48f905d02b715d97622858666946ae49ead887c419aa8\"" Apr 24 23:53:09.160721 systemd[1]: Started cri-containerd-65937d74a442a2894ee48f905d02b715d97622858666946ae49ead887c419aa8.scope - libcontainer container 65937d74a442a2894ee48f905d02b715d97622858666946ae49ead887c419aa8. 
Apr 24 23:53:09.307027 containerd[1469]: time="2026-04-24T23:53:09.305910151Z" level=info msg="StartContainer for \"65937d74a442a2894ee48f905d02b715d97622858666946ae49ead887c419aa8\" returns successfully" Apr 24 23:53:09.422465 kubelet[2518]: E0424 23:53:09.422307 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:09.522835 kubelet[2518]: E0424 23:53:09.522577 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.522835 kubelet[2518]: W0424 23:53:09.522737 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.526779 kubelet[2518]: E0424 23:53:09.523046 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.528014 kubelet[2518]: E0424 23:53:09.527972 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.528303 kubelet[2518]: W0424 23:53:09.528039 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.528303 kubelet[2518]: E0424 23:53:09.528174 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.531111 kubelet[2518]: E0424 23:53:09.530797 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.531111 kubelet[2518]: W0424 23:53:09.530916 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.531111 kubelet[2518]: E0424 23:53:09.531075 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.536405 kubelet[2518]: E0424 23:53:09.536153 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.537324 kubelet[2518]: W0424 23:53:09.536487 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.537966 kubelet[2518]: E0424 23:53:09.537543 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.540071 kubelet[2518]: E0424 23:53:09.539975 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.540071 kubelet[2518]: W0424 23:53:09.539988 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.540071 kubelet[2518]: E0424 23:53:09.540004 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.540560 kubelet[2518]: E0424 23:53:09.540439 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.540560 kubelet[2518]: W0424 23:53:09.540450 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.540560 kubelet[2518]: E0424 23:53:09.540521 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.540961 kubelet[2518]: E0424 23:53:09.540850 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.540961 kubelet[2518]: W0424 23:53:09.540859 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.540961 kubelet[2518]: E0424 23:53:09.540867 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.541515 kubelet[2518]: E0424 23:53:09.541504 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.541586 kubelet[2518]: W0424 23:53:09.541578 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.541727 kubelet[2518]: E0424 23:53:09.541637 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.547585 kubelet[2518]: E0424 23:53:09.546184 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.547585 kubelet[2518]: W0424 23:53:09.546435 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.551027 kubelet[2518]: E0424 23:53:09.546914 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.559794 kubelet[2518]: E0424 23:53:09.559218 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.559794 kubelet[2518]: W0424 23:53:09.559359 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.559794 kubelet[2518]: E0424 23:53:09.559667 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.561220 kubelet[2518]: E0424 23:53:09.560190 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.561220 kubelet[2518]: W0424 23:53:09.560202 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.561220 kubelet[2518]: E0424 23:53:09.560215 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.561220 kubelet[2518]: E0424 23:53:09.561018 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.561220 kubelet[2518]: W0424 23:53:09.561032 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.561220 kubelet[2518]: E0424 23:53:09.561045 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.562894 kubelet[2518]: E0424 23:53:09.562493 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.562894 kubelet[2518]: W0424 23:53:09.562518 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.562894 kubelet[2518]: E0424 23:53:09.562528 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.579163 kubelet[2518]: E0424 23:53:09.578894 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.579908 kubelet[2518]: W0424 23:53:09.579115 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.579908 kubelet[2518]: E0424 23:53:09.579424 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.582484 kubelet[2518]: E0424 23:53:09.582440 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.582484 kubelet[2518]: W0424 23:53:09.582475 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.582603 kubelet[2518]: E0424 23:53:09.582499 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.586717 kubelet[2518]: E0424 23:53:09.586530 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.586717 kubelet[2518]: W0424 23:53:09.586684 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.586717 kubelet[2518]: E0424 23:53:09.586764 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.588208 kubelet[2518]: E0424 23:53:09.588112 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.588208 kubelet[2518]: W0424 23:53:09.588155 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.588208 kubelet[2518]: E0424 23:53:09.588201 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.590227 kubelet[2518]: E0424 23:53:09.589982 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.590227 kubelet[2518]: W0424 23:53:09.590023 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.590227 kubelet[2518]: E0424 23:53:09.590037 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.590518 kubelet[2518]: E0424 23:53:09.590403 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.590518 kubelet[2518]: W0424 23:53:09.590411 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.590518 kubelet[2518]: E0424 23:53:09.590420 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.590841 kubelet[2518]: E0424 23:53:09.590819 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.590872 kubelet[2518]: W0424 23:53:09.590841 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.590918 kubelet[2518]: E0424 23:53:09.590890 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.591188 kubelet[2518]: E0424 23:53:09.591175 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.591188 kubelet[2518]: W0424 23:53:09.591187 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.591253 kubelet[2518]: E0424 23:53:09.591200 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.591547 kubelet[2518]: E0424 23:53:09.591533 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.591575 kubelet[2518]: W0424 23:53:09.591546 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.591575 kubelet[2518]: E0424 23:53:09.591556 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.591770 kubelet[2518]: E0424 23:53:09.591758 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.591793 kubelet[2518]: W0424 23:53:09.591785 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.591809 kubelet[2518]: E0424 23:53:09.591792 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.592080 kubelet[2518]: E0424 23:53:09.592060 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.592080 kubelet[2518]: W0424 23:53:09.592074 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.592080 kubelet[2518]: E0424 23:53:09.592080 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.592413 kubelet[2518]: E0424 23:53:09.592388 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.592413 kubelet[2518]: W0424 23:53:09.592406 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.592482 kubelet[2518]: E0424 23:53:09.592414 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.594134 kubelet[2518]: E0424 23:53:09.592662 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.594134 kubelet[2518]: W0424 23:53:09.592669 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.594134 kubelet[2518]: E0424 23:53:09.592676 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.594134 kubelet[2518]: E0424 23:53:09.593092 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.594134 kubelet[2518]: W0424 23:53:09.593105 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.594134 kubelet[2518]: E0424 23:53:09.593119 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.597755 kubelet[2518]: E0424 23:53:09.597646 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.597816 kubelet[2518]: W0424 23:53:09.597755 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.597875 kubelet[2518]: E0424 23:53:09.597849 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.606479 kubelet[2518]: E0424 23:53:09.604247 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.606479 kubelet[2518]: W0424 23:53:09.604436 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.606479 kubelet[2518]: E0424 23:53:09.604733 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.608732 kubelet[2518]: E0424 23:53:09.608601 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.608879 kubelet[2518]: W0424 23:53:09.608771 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.609069 kubelet[2518]: E0424 23:53:09.609039 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.612150 kubelet[2518]: E0424 23:53:09.612089 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.612150 kubelet[2518]: W0424 23:53:09.612118 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.612150 kubelet[2518]: E0424 23:53:09.612136 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:09.612815 kubelet[2518]: E0424 23:53:09.612762 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.612815 kubelet[2518]: W0424 23:53:09.612786 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.612884 kubelet[2518]: E0424 23:53:09.612799 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:09.620254 kubelet[2518]: E0424 23:53:09.619777 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:09.621019 kubelet[2518]: W0424 23:53:09.620242 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:09.621019 kubelet[2518]: E0424 23:53:09.620500 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.217603 kubelet[2518]: E0424 23:53:10.217503 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:10.429023 kubelet[2518]: I0424 23:53:10.428020 2518 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:53:10.429023 kubelet[2518]: E0424 23:53:10.428833 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:10.520257 kubelet[2518]: E0424 23:53:10.519775 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.522568 kubelet[2518]: W0424 23:53:10.522370 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.522875 kubelet[2518]: E0424 23:53:10.522764 2518 
plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.524302 kubelet[2518]: E0424 23:53:10.524194 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.524431 kubelet[2518]: W0424 23:53:10.524309 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.524479 kubelet[2518]: E0424 23:53:10.524444 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.524754 kubelet[2518]: E0424 23:53:10.524738 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.524754 kubelet[2518]: W0424 23:53:10.524749 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.524793 kubelet[2518]: E0424 23:53:10.524767 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.525095 kubelet[2518]: E0424 23:53:10.525075 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.525095 kubelet[2518]: W0424 23:53:10.525088 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.525095 kubelet[2518]: E0424 23:53:10.525095 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.525367 kubelet[2518]: E0424 23:53:10.525355 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.525367 kubelet[2518]: W0424 23:53:10.525366 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.525430 kubelet[2518]: E0424 23:53:10.525372 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.525749 kubelet[2518]: E0424 23:53:10.525729 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.525871 kubelet[2518]: W0424 23:53:10.525850 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.525950 kubelet[2518]: E0424 23:53:10.525873 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.526218 kubelet[2518]: E0424 23:53:10.526204 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.526218 kubelet[2518]: W0424 23:53:10.526217 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.526278 kubelet[2518]: E0424 23:53:10.526226 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.526419 kubelet[2518]: E0424 23:53:10.526394 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.526419 kubelet[2518]: W0424 23:53:10.526407 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.526419 kubelet[2518]: E0424 23:53:10.526414 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.526598 kubelet[2518]: E0424 23:53:10.526586 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.526598 kubelet[2518]: W0424 23:53:10.526596 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.526643 kubelet[2518]: E0424 23:53:10.526603 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.526764 kubelet[2518]: E0424 23:53:10.526753 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.526764 kubelet[2518]: W0424 23:53:10.526762 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.526800 kubelet[2518]: E0424 23:53:10.526768 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.526940 kubelet[2518]: E0424 23:53:10.526909 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.526979 kubelet[2518]: W0424 23:53:10.526943 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.526979 kubelet[2518]: E0424 23:53:10.526950 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.527120 kubelet[2518]: E0424 23:53:10.527107 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.527120 kubelet[2518]: W0424 23:53:10.527119 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.527158 kubelet[2518]: E0424 23:53:10.527125 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.527292 kubelet[2518]: E0424 23:53:10.527281 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.527292 kubelet[2518]: W0424 23:53:10.527291 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.527328 kubelet[2518]: E0424 23:53:10.527301 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.527456 kubelet[2518]: E0424 23:53:10.527444 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.527456 kubelet[2518]: W0424 23:53:10.527454 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.527493 kubelet[2518]: E0424 23:53:10.527463 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.527605 kubelet[2518]: E0424 23:53:10.527594 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.528477 kubelet[2518]: W0424 23:53:10.528438 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.528477 kubelet[2518]: E0424 23:53:10.528472 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.531487 kubelet[2518]: E0424 23:53:10.531339 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.531487 kubelet[2518]: W0424 23:53:10.531354 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.531487 kubelet[2518]: E0424 23:53:10.531365 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.533998 kubelet[2518]: E0424 23:53:10.533880 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.533998 kubelet[2518]: W0424 23:53:10.533891 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.533998 kubelet[2518]: E0424 23:53:10.533902 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.534547 kubelet[2518]: E0424 23:53:10.534530 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.534547 kubelet[2518]: W0424 23:53:10.534545 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.534625 kubelet[2518]: E0424 23:53:10.534555 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.534791 kubelet[2518]: E0424 23:53:10.534765 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.534791 kubelet[2518]: W0424 23:53:10.534780 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.534791 kubelet[2518]: E0424 23:53:10.534788 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.535024 kubelet[2518]: E0424 23:53:10.535010 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.535042 kubelet[2518]: W0424 23:53:10.535023 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.535042 kubelet[2518]: E0424 23:53:10.535032 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.535247 kubelet[2518]: E0424 23:53:10.535236 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.535247 kubelet[2518]: W0424 23:53:10.535246 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.535283 kubelet[2518]: E0424 23:53:10.535252 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.535397 kubelet[2518]: E0424 23:53:10.535385 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.535397 kubelet[2518]: W0424 23:53:10.535395 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.535429 kubelet[2518]: E0424 23:53:10.535401 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.535697 kubelet[2518]: E0424 23:53:10.535675 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.535697 kubelet[2518]: W0424 23:53:10.535687 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.535697 kubelet[2518]: E0424 23:53:10.535693 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.536061 kubelet[2518]: E0424 23:53:10.536007 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.536061 kubelet[2518]: W0424 23:53:10.536025 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.536061 kubelet[2518]: E0424 23:53:10.536038 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.536257 kubelet[2518]: E0424 23:53:10.536217 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.536257 kubelet[2518]: W0424 23:53:10.536222 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.536257 kubelet[2518]: E0424 23:53:10.536228 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.536385 kubelet[2518]: E0424 23:53:10.536366 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.536385 kubelet[2518]: W0424 23:53:10.536376 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.536385 kubelet[2518]: E0424 23:53:10.536382 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.536522 kubelet[2518]: E0424 23:53:10.536510 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.536522 kubelet[2518]: W0424 23:53:10.536520 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.536561 kubelet[2518]: E0424 23:53:10.536525 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.536747 kubelet[2518]: E0424 23:53:10.536734 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.536747 kubelet[2518]: W0424 23:53:10.536745 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.536778 kubelet[2518]: E0424 23:53:10.536751 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.536992 kubelet[2518]: E0424 23:53:10.536980 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.536992 kubelet[2518]: W0424 23:53:10.536991 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.537025 kubelet[2518]: E0424 23:53:10.536998 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.537255 kubelet[2518]: E0424 23:53:10.537225 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.537255 kubelet[2518]: W0424 23:53:10.537242 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.537255 kubelet[2518]: E0424 23:53:10.537251 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.537411 kubelet[2518]: E0424 23:53:10.537399 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.537411 kubelet[2518]: W0424 23:53:10.537410 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.537441 kubelet[2518]: E0424 23:53:10.537416 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:10.537697 kubelet[2518]: E0424 23:53:10.537683 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.537716 kubelet[2518]: W0424 23:53:10.537696 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.537716 kubelet[2518]: E0424 23:53:10.537704 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:53:10.537908 kubelet[2518]: E0424 23:53:10.537894 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:53:10.537908 kubelet[2518]: W0424 23:53:10.537904 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:53:10.537908 kubelet[2518]: E0424 23:53:10.537910 2518 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:53:11.276984 containerd[1469]: time="2026-04-24T23:53:11.276770292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:11.279557 containerd[1469]: time="2026-04-24T23:53:11.279352571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 24 23:53:11.282383 containerd[1469]: time="2026-04-24T23:53:11.282351975Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:11.288417 containerd[1469]: time="2026-04-24T23:53:11.288114360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:11.294984 containerd[1469]: time="2026-04-24T23:53:11.294863999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.248534539s" Apr 24 23:53:11.295172 containerd[1469]: time="2026-04-24T23:53:11.295067073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 24 23:53:11.369467 containerd[1469]: time="2026-04-24T23:53:11.369377868Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:53:11.414633 containerd[1469]: time="2026-04-24T23:53:11.414166564Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8\"" Apr 24 23:53:11.426518 containerd[1469]: time="2026-04-24T23:53:11.426259767Z" level=info msg="StartContainer for \"42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8\"" Apr 24 23:53:11.614982 systemd[1]: Started cri-containerd-42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8.scope - libcontainer container 42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8. Apr 24 23:53:11.646306 containerd[1469]: time="2026-04-24T23:53:11.646259043Z" level=info msg="StartContainer for \"42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8\" returns successfully" Apr 24 23:53:11.712445 systemd[1]: cri-containerd-42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8.scope: Deactivated successfully. Apr 24 23:53:11.770731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8-rootfs.mount: Deactivated successfully. 
Apr 24 23:53:11.803396 containerd[1469]: time="2026-04-24T23:53:11.799012174Z" level=info msg="shim disconnected" id=42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8 namespace=k8s.io Apr 24 23:53:11.804528 containerd[1469]: time="2026-04-24T23:53:11.803507765Z" level=warning msg="cleaning up after shim disconnected" id=42f8662b177388b293e433d2b8b3ae5f6f172714de1a4b69ccb82066066a4fb8 namespace=k8s.io Apr 24 23:53:11.804528 containerd[1469]: time="2026-04-24T23:53:11.803592777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:53:12.219122 kubelet[2518]: E0424 23:53:12.218690 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:12.573691 containerd[1469]: time="2026-04-24T23:53:12.572101156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:53:12.635350 kubelet[2518]: I0424 23:53:12.634633 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-75cd6cb456-slln4" podStartSLOduration=5.133089451 podStartE2EDuration="7.634547817s" podCreationTimestamp="2026-04-24 23:53:05 +0000 UTC" firstStartedPulling="2026-04-24 23:53:06.544541706 +0000 UTC m=+17.608665023" lastFinishedPulling="2026-04-24 23:53:09.046000083 +0000 UTC m=+20.110123389" observedRunningTime="2026-04-24 23:53:09.582247861 +0000 UTC m=+20.646371194" watchObservedRunningTime="2026-04-24 23:53:12.634547817 +0000 UTC m=+23.698671143" Apr 24 23:53:14.217359 kubelet[2518]: E0424 23:53:14.217202 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:15.864438 kubelet[2518]: I0424 23:53:15.864290 2518 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:53:15.866065 kubelet[2518]: E0424 23:53:15.865670 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:16.218775 kubelet[2518]: E0424 23:53:16.218686 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:16.595794 kubelet[2518]: E0424 23:53:16.595233 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:18.220599 kubelet[2518]: E0424 23:53:18.219342 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:19.993535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598431556.mount: Deactivated successfully. 
Apr 24 23:53:20.134476 containerd[1469]: time="2026-04-24T23:53:20.134317761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:53:20.146328 containerd[1469]: time="2026-04-24T23:53:20.146278988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.57357859s" Apr 24 23:53:20.146328 containerd[1469]: time="2026-04-24T23:53:20.146334598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:53:20.150402 containerd[1469]: time="2026-04-24T23:53:20.150362392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:20.156084 containerd[1469]: time="2026-04-24T23:53:20.154234685Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:53:20.161751 containerd[1469]: time="2026-04-24T23:53:20.161529711Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:20.174833 containerd[1469]: time="2026-04-24T23:53:20.174177032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:20.219822 kubelet[2518]: E0424 23:53:20.219362 2518 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:20.240151 containerd[1469]: time="2026-04-24T23:53:20.240062108Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c\"" Apr 24 23:53:20.241024 containerd[1469]: time="2026-04-24T23:53:20.240992456Z" level=info msg="StartContainer for \"2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c\"" Apr 24 23:53:20.345791 systemd[1]: Started cri-containerd-2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c.scope - libcontainer container 2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c. Apr 24 23:53:20.468758 containerd[1469]: time="2026-04-24T23:53:20.468488488Z" level=info msg="StartContainer for \"2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c\" returns successfully" Apr 24 23:53:20.646255 systemd[1]: cri-containerd-2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c.scope: Deactivated successfully. 
Apr 24 23:53:20.724105 containerd[1469]: time="2026-04-24T23:53:20.723836618Z" level=info msg="shim disconnected" id=2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c namespace=k8s.io Apr 24 23:53:20.724105 containerd[1469]: time="2026-04-24T23:53:20.724038643Z" level=warning msg="cleaning up after shim disconnected" id=2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c namespace=k8s.io Apr 24 23:53:20.724105 containerd[1469]: time="2026-04-24T23:53:20.724047660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:53:20.996362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f64d2a3f80563b44566053fd5c8567cafb58f1b40b8f98b43471345ead9b35c-rootfs.mount: Deactivated successfully. Apr 24 23:53:21.660588 containerd[1469]: time="2026-04-24T23:53:21.660399645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:53:22.219049 kubelet[2518]: E0424 23:53:22.218755 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:24.218077 kubelet[2518]: E0424 23:53:24.217842 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:25.944872 containerd[1469]: time="2026-04-24T23:53:25.944586944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:25.946656 containerd[1469]: time="2026-04-24T23:53:25.945005060Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:53:25.946656 containerd[1469]: time="2026-04-24T23:53:25.946192959Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:25.948102 containerd[1469]: time="2026-04-24T23:53:25.948067342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:25.948809 containerd[1469]: time="2026-04-24T23:53:25.948765525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.288296212s" Apr 24 23:53:25.948809 containerd[1469]: time="2026-04-24T23:53:25.948801512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:53:25.960887 containerd[1469]: time="2026-04-24T23:53:25.960664431Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:53:25.997913 containerd[1469]: time="2026-04-24T23:53:25.997558524Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5\"" Apr 24 23:53:26.001694 containerd[1469]: time="2026-04-24T23:53:26.001640682Z" level=info msg="StartContainer 
for \"8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5\"" Apr 24 23:53:26.083155 systemd[1]: Started cri-containerd-8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5.scope - libcontainer container 8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5. Apr 24 23:53:26.118477 containerd[1469]: time="2026-04-24T23:53:26.118212922Z" level=info msg="StartContainer for \"8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5\" returns successfully" Apr 24 23:53:26.221473 kubelet[2518]: E0424 23:53:26.219813 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x29xn" podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:27.041439 systemd[1]: cri-containerd-8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5.scope: Deactivated successfully. Apr 24 23:53:27.041891 systemd[1]: cri-containerd-8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5.scope: Consumed 1.029s CPU time. Apr 24 23:53:27.062336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5-rootfs.mount: Deactivated successfully. 
Apr 24 23:53:27.091375 kubelet[2518]: I0424 23:53:27.090304 2518 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 24 23:53:27.117108 containerd[1469]: time="2026-04-24T23:53:27.116629728Z" level=info msg="shim disconnected" id=8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5 namespace=k8s.io Apr 24 23:53:27.117108 containerd[1469]: time="2026-04-24T23:53:27.116961476Z" level=warning msg="cleaning up after shim disconnected" id=8be1c4bab09d5966455a3e4d8ce743ca7b73f8d4c7793b3d192472e124c3ccd5 namespace=k8s.io Apr 24 23:53:27.117108 containerd[1469]: time="2026-04-24T23:53:27.116977630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:53:27.221833 systemd[1]: Created slice kubepods-burstable-pod2654d15d_7c11_4173_8201_2dab36e1b04b.slice - libcontainer container kubepods-burstable-pod2654d15d_7c11_4173_8201_2dab36e1b04b.slice. Apr 24 23:53:27.241162 systemd[1]: Created slice kubepods-burstable-pod707b52a6_210e_4b12_ba60_391f0fd35951.slice - libcontainer container kubepods-burstable-pod707b52a6_210e_4b12_ba60_391f0fd35951.slice. 
Apr 24 23:53:27.245656 kubelet[2518]: I0424 23:53:27.244524 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ccd9b5aa-d456-4c30-851f-fc449a59f911-goldmane-key-pair\") pod \"goldmane-9f7667bb8-xjf5d\" (UID: \"ccd9b5aa-d456-4c30-851f-fc449a59f911\") " pod="calico-system/goldmane-9f7667bb8-xjf5d"
Apr 24 23:53:27.245656 kubelet[2518]: I0424 23:53:27.244615 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db8622af-35ea-4a80-8494-1d6f8141a50e-tigera-ca-bundle\") pod \"calico-kube-controllers-64c5b888c4-bkhcl\" (UID: \"db8622af-35ea-4a80-8494-1d6f8141a50e\") " pod="calico-system/calico-kube-controllers-64c5b888c4-bkhcl"
Apr 24 23:53:27.245656 kubelet[2518]: I0424 23:53:27.244629 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdzf\" (UniqueName: \"kubernetes.io/projected/db8622af-35ea-4a80-8494-1d6f8141a50e-kube-api-access-mcdzf\") pod \"calico-kube-controllers-64c5b888c4-bkhcl\" (UID: \"db8622af-35ea-4a80-8494-1d6f8141a50e\") " pod="calico-system/calico-kube-controllers-64c5b888c4-bkhcl"
Apr 24 23:53:27.245656 kubelet[2518]: I0424 23:53:27.244686 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqrt\" (UniqueName: \"kubernetes.io/projected/2654d15d-7c11-4173-8201-2dab36e1b04b-kube-api-access-8bqrt\") pod \"coredns-7d764666f9-cq94j\" (UID: \"2654d15d-7c11-4173-8201-2dab36e1b04b\") " pod="kube-system/coredns-7d764666f9-cq94j"
Apr 24 23:53:27.245656 kubelet[2518]: I0424 23:53:27.244702 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-nginx-config\") pod \"whisker-85894b7686-49tvq\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " pod="calico-system/whisker-85894b7686-49tvq"
Apr 24 23:53:27.246457 kubelet[2518]: I0424 23:53:27.244718 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair\") pod \"whisker-85894b7686-49tvq\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " pod="calico-system/whisker-85894b7686-49tvq"
Apr 24 23:53:27.246457 kubelet[2518]: I0424 23:53:27.244743 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccd9b5aa-d456-4c30-851f-fc449a59f911-config\") pod \"goldmane-9f7667bb8-xjf5d\" (UID: \"ccd9b5aa-d456-4c30-851f-fc449a59f911\") " pod="calico-system/goldmane-9f7667bb8-xjf5d"
Apr 24 23:53:27.246457 kubelet[2518]: I0424 23:53:27.244760 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-ca-bundle\") pod \"whisker-85894b7686-49tvq\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " pod="calico-system/whisker-85894b7686-49tvq"
Apr 24 23:53:27.246457 kubelet[2518]: I0424 23:53:27.244774 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4ed747e3-6b68-48ad-8996-db64a1a66b08-calico-apiserver-certs\") pod \"calico-apiserver-5c6597b7cd-t7h5t\" (UID: \"4ed747e3-6b68-48ad-8996-db64a1a66b08\") " pod="calico-system/calico-apiserver-5c6597b7cd-t7h5t"
Apr 24 23:53:27.246457 kubelet[2518]: I0424 23:53:27.244787 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmll\" (UniqueName: \"kubernetes.io/projected/ccd9b5aa-d456-4c30-851f-fc449a59f911-kube-api-access-8gmll\") pod \"goldmane-9f7667bb8-xjf5d\" (UID: \"ccd9b5aa-d456-4c30-851f-fc449a59f911\") " pod="calico-system/goldmane-9f7667bb8-xjf5d"
Apr 24 23:53:27.246611 kubelet[2518]: I0424 23:53:27.244829 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/707b52a6-210e-4b12-ba60-391f0fd35951-config-volume\") pod \"coredns-7d764666f9-zlthf\" (UID: \"707b52a6-210e-4b12-ba60-391f0fd35951\") " pod="kube-system/coredns-7d764666f9-zlthf"
Apr 24 23:53:27.246611 kubelet[2518]: I0424 23:53:27.244845 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nrg\" (UniqueName: \"kubernetes.io/projected/98059560-e73c-45fa-bdb8-c35ea535eaa4-kube-api-access-d7nrg\") pod \"calico-apiserver-5c6597b7cd-2bsfc\" (UID: \"98059560-e73c-45fa-bdb8-c35ea535eaa4\") " pod="calico-system/calico-apiserver-5c6597b7cd-2bsfc"
Apr 24 23:53:27.246611 kubelet[2518]: I0424 23:53:27.244862 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wtw\" (UniqueName: \"kubernetes.io/projected/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-kube-api-access-c7wtw\") pod \"whisker-85894b7686-49tvq\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " pod="calico-system/whisker-85894b7686-49tvq"
Apr 24 23:53:27.246611 kubelet[2518]: I0424 23:53:27.244876 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrcck\" (UniqueName: \"kubernetes.io/projected/707b52a6-210e-4b12-ba60-391f0fd35951-kube-api-access-rrcck\") pod \"coredns-7d764666f9-zlthf\" (UID: \"707b52a6-210e-4b12-ba60-391f0fd35951\") " pod="kube-system/coredns-7d764666f9-zlthf"
Apr 24 23:53:27.246611 kubelet[2518]: I0424 23:53:27.244888 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98059560-e73c-45fa-bdb8-c35ea535eaa4-calico-apiserver-certs\") pod \"calico-apiserver-5c6597b7cd-2bsfc\" (UID: \"98059560-e73c-45fa-bdb8-c35ea535eaa4\") " pod="calico-system/calico-apiserver-5c6597b7cd-2bsfc"
Apr 24 23:53:27.246704 kubelet[2518]: I0424 23:53:27.244901 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccd9b5aa-d456-4c30-851f-fc449a59f911-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-xjf5d\" (UID: \"ccd9b5aa-d456-4c30-851f-fc449a59f911\") " pod="calico-system/goldmane-9f7667bb8-xjf5d"
Apr 24 23:53:27.246704 kubelet[2518]: I0424 23:53:27.244911 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2654d15d-7c11-4173-8201-2dab36e1b04b-config-volume\") pod \"coredns-7d764666f9-cq94j\" (UID: \"2654d15d-7c11-4173-8201-2dab36e1b04b\") " pod="kube-system/coredns-7d764666f9-cq94j"
Apr 24 23:53:27.246704 kubelet[2518]: I0424 23:53:27.244955 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjtz\" (UniqueName: \"kubernetes.io/projected/4ed747e3-6b68-48ad-8996-db64a1a66b08-kube-api-access-rkjtz\") pod \"calico-apiserver-5c6597b7cd-t7h5t\" (UID: \"4ed747e3-6b68-48ad-8996-db64a1a66b08\") " pod="calico-system/calico-apiserver-5c6597b7cd-t7h5t"
Apr 24 23:53:27.315694 systemd[1]: Created slice kubepods-besteffort-pod2ff72427_bc1b_4a96_86a8_7d0aff3ddae5.slice - libcontainer container kubepods-besteffort-pod2ff72427_bc1b_4a96_86a8_7d0aff3ddae5.slice.
Apr 24 23:53:27.338824 systemd[1]: Created slice kubepods-besteffort-podccd9b5aa_d456_4c30_851f_fc449a59f911.slice - libcontainer container kubepods-besteffort-podccd9b5aa_d456_4c30_851f_fc449a59f911.slice.
Apr 24 23:53:27.371353 systemd[1]: Created slice kubepods-besteffort-pod98059560_e73c_45fa_bdb8_c35ea535eaa4.slice - libcontainer container kubepods-besteffort-pod98059560_e73c_45fa_bdb8_c35ea535eaa4.slice.
Apr 24 23:53:27.425835 systemd[1]: Created slice kubepods-besteffort-pod4ed747e3_6b68_48ad_8996_db64a1a66b08.slice - libcontainer container kubepods-besteffort-pod4ed747e3_6b68_48ad_8996_db64a1a66b08.slice.
Apr 24 23:53:27.446566 containerd[1469]: time="2026-04-24T23:53:27.446071636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-t7h5t,Uid:4ed747e3-6b68-48ad-8996-db64a1a66b08,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:27.447250 systemd[1]: Created slice kubepods-besteffort-poddb8622af_35ea_4a80_8494_1d6f8141a50e.slice - libcontainer container kubepods-besteffort-poddb8622af_35ea_4a80_8494_1d6f8141a50e.slice.
Apr 24 23:53:27.464384 containerd[1469]: time="2026-04-24T23:53:27.463534605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c5b888c4-bkhcl,Uid:db8622af-35ea-4a80-8494-1d6f8141a50e,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:27.538839 kubelet[2518]: E0424 23:53:27.538563 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:53:27.545949 containerd[1469]: time="2026-04-24T23:53:27.544903626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cq94j,Uid:2654d15d-7c11-4173-8201-2dab36e1b04b,Namespace:kube-system,Attempt:0,}"
Apr 24 23:53:27.568131 kubelet[2518]: E0424 23:53:27.567430 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:53:27.581567 containerd[1469]: time="2026-04-24T23:53:27.581242419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zlthf,Uid:707b52a6-210e-4b12-ba60-391f0fd35951,Namespace:kube-system,Attempt:0,}"
Apr 24 23:53:27.652746 containerd[1469]: time="2026-04-24T23:53:27.652376806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85894b7686-49tvq,Uid:2ff72427-bc1b-4a96-86a8-7d0aff3ddae5,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:27.696303 containerd[1469]: time="2026-04-24T23:53:27.696165771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xjf5d,Uid:ccd9b5aa-d456-4c30-851f-fc449a59f911,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:27.697463 containerd[1469]: time="2026-04-24T23:53:27.696170050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-2bsfc,Uid:98059560-e73c-45fa-bdb8-c35ea535eaa4,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:27.851679 containerd[1469]: time="2026-04-24T23:53:27.850314329Z" level=error msg="Failed to destroy network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.851679 containerd[1469]: time="2026-04-24T23:53:27.851229627Z" level=error msg="encountered an error cleaning up failed sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.851679 containerd[1469]: time="2026-04-24T23:53:27.851414222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cq94j,Uid:2654d15d-7c11-4173-8201-2dab36e1b04b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.855085 containerd[1469]: time="2026-04-24T23:53:27.854898848Z" level=error msg="Failed to destroy network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.902118 containerd[1469]: time="2026-04-24T23:53:27.901904869Z" level=error msg="encountered an error cleaning up failed sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.903123 containerd[1469]: time="2026-04-24T23:53:27.903091435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-t7h5t,Uid:4ed747e3-6b68-48ad-8996-db64a1a66b08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.908789 containerd[1469]: time="2026-04-24T23:53:27.908720107Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 24 23:53:27.912417 containerd[1469]: time="2026-04-24T23:53:27.911532479Z" level=error msg="Failed to destroy network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.925386 containerd[1469]: time="2026-04-24T23:53:27.925313857Z" level=error msg="encountered an error cleaning up failed sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.925386 containerd[1469]: time="2026-04-24T23:53:27.925396909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c5b888c4-bkhcl,Uid:db8622af-35ea-4a80-8494-1d6f8141a50e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.946069 kubelet[2518]: E0424 23:53:27.945739 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.946846 kubelet[2518]: E0424 23:53:27.946141 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cq94j"
Apr 24 23:53:27.946846 kubelet[2518]: E0424 23:53:27.946230 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cq94j"
Apr 24 23:53:27.946846 kubelet[2518]: E0424 23:53:27.946521 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-cq94j_kube-system(2654d15d-7c11-4173-8201-2dab36e1b04b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-cq94j_kube-system(2654d15d-7c11-4173-8201-2dab36e1b04b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cq94j" podUID="2654d15d-7c11-4173-8201-2dab36e1b04b"
Apr 24 23:53:27.951182 kubelet[2518]: E0424 23:53:27.948584 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.951182 kubelet[2518]: E0424 23:53:27.948583 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:27.951182 kubelet[2518]: E0424 23:53:27.950566 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c5b888c4-bkhcl"
Apr 24 23:53:27.951182 kubelet[2518]: E0424 23:53:27.950539 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c6597b7cd-t7h5t"
Apr 24 23:53:27.952097 kubelet[2518]: E0424 23:53:27.950663 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c5b888c4-bkhcl"
Apr 24 23:53:27.952097 kubelet[2518]: E0424 23:53:27.950736 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c6597b7cd-t7h5t"
Apr 24 23:53:27.952097 kubelet[2518]: E0424 23:53:27.951037 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64c5b888c4-bkhcl_calico-system(db8622af-35ea-4a80-8494-1d6f8141a50e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64c5b888c4-bkhcl_calico-system(db8622af-35ea-4a80-8494-1d6f8141a50e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64c5b888c4-bkhcl" podUID="db8622af-35ea-4a80-8494-1d6f8141a50e"
Apr 24 23:53:27.975115 kubelet[2518]: E0424 23:53:27.951214 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6597b7cd-t7h5t_calico-system(4ed747e3-6b68-48ad-8996-db64a1a66b08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6597b7cd-t7h5t_calico-system(4ed747e3-6b68-48ad-8996-db64a1a66b08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c6597b7cd-t7h5t" podUID="4ed747e3-6b68-48ad-8996-db64a1a66b08"
Apr 24 23:53:27.977539 containerd[1469]: time="2026-04-24T23:53:27.977332957Z" level=info msg="CreateContainer within sandbox \"b9c3967c754abe4a9ecec84459775448e88a7ecdf67a4cc65a45bdd02a533b33\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a0fb3ea15099c8ff854159ab5aa853bf73d355f745f293271f89f1d36554e02\""
Apr 24 23:53:27.983818 containerd[1469]: time="2026-04-24T23:53:27.983161176Z" level=info msg="StartContainer for \"6a0fb3ea15099c8ff854159ab5aa853bf73d355f745f293271f89f1d36554e02\""
Apr 24 23:53:28.017320 containerd[1469]: time="2026-04-24T23:53:28.016980940Z" level=error msg="Failed to destroy network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.019063 containerd[1469]: time="2026-04-24T23:53:28.019026650Z" level=error msg="encountered an error cleaning up failed sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.023861 containerd[1469]: time="2026-04-24T23:53:28.022169927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zlthf,Uid:707b52a6-210e-4b12-ba60-391f0fd35951,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.028721 kubelet[2518]: E0424 23:53:28.027914 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.032642 kubelet[2518]: E0424 23:53:28.031384 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-zlthf"
Apr 24 23:53:28.034559 kubelet[2518]: E0424 23:53:28.033218 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-zlthf"
Apr 24 23:53:28.039160 kubelet[2518]: E0424 23:53:28.038255 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-zlthf_kube-system(707b52a6-210e-4b12-ba60-391f0fd35951)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-zlthf_kube-system(707b52a6-210e-4b12-ba60-391f0fd35951)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-zlthf" podUID="707b52a6-210e-4b12-ba60-391f0fd35951"
Apr 24 23:53:28.058121 containerd[1469]: time="2026-04-24T23:53:28.057836447Z" level=error msg="Failed to destroy network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.059791 containerd[1469]: time="2026-04-24T23:53:28.059747820Z" level=error msg="encountered an error cleaning up failed sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.066651 containerd[1469]: time="2026-04-24T23:53:28.066500361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xjf5d,Uid:ccd9b5aa-d456-4c30-851f-fc449a59f911,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.069518 kubelet[2518]: E0424 23:53:28.069282 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.069785 kubelet[2518]: E0424 23:53:28.069696 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-xjf5d"
Apr 24 23:53:28.069785 kubelet[2518]: E0424 23:53:28.069752 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-xjf5d"
Apr 24 23:53:28.071747 kubelet[2518]: E0424 23:53:28.071476 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-xjf5d_calico-system(ccd9b5aa-d456-4c30-851f-fc449a59f911)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-xjf5d_calico-system(ccd9b5aa-d456-4c30-851f-fc449a59f911)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-xjf5d" podUID="ccd9b5aa-d456-4c30-851f-fc449a59f911"
Apr 24 23:53:28.127468 containerd[1469]: time="2026-04-24T23:53:28.126529335Z" level=error msg="Failed to destroy network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.135051 containerd[1469]: time="2026-04-24T23:53:28.134617351Z" level=error msg="encountered an error cleaning up failed sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.136272 containerd[1469]: time="2026-04-24T23:53:28.135299381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85894b7686-49tvq,Uid:2ff72427-bc1b-4a96-86a8-7d0aff3ddae5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.137209 kubelet[2518]: E0424 23:53:28.136970 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.137454 kubelet[2518]: E0424 23:53:28.137417 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85894b7686-49tvq"
Apr 24 23:53:28.137542 kubelet[2518]: E0424 23:53:28.137504 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85894b7686-49tvq"
Apr 24 23:53:28.137516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b-shm.mount: Deactivated successfully.
Apr 24 23:53:28.138142 kubelet[2518]: E0424 23:53:28.137756 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85894b7686-49tvq_calico-system(2ff72427-bc1b-4a96-86a8-7d0aff3ddae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85894b7686-49tvq_calico-system(2ff72427-bc1b-4a96-86a8-7d0aff3ddae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85894b7686-49tvq" podUID="2ff72427-bc1b-4a96-86a8-7d0aff3ddae5"
Apr 24 23:53:28.148637 systemd[1]: Started cri-containerd-6a0fb3ea15099c8ff854159ab5aa853bf73d355f745f293271f89f1d36554e02.scope - libcontainer container 6a0fb3ea15099c8ff854159ab5aa853bf73d355f745f293271f89f1d36554e02.
Apr 24 23:53:28.210669 containerd[1469]: time="2026-04-24T23:53:28.210321824Z" level=error msg="Failed to destroy network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.212219 containerd[1469]: time="2026-04-24T23:53:28.212185231Z" level=error msg="encountered an error cleaning up failed sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.212954 containerd[1469]: time="2026-04-24T23:53:28.212574206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-2bsfc,Uid:98059560-e73c-45fa-bdb8-c35ea535eaa4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.215014 kubelet[2518]: E0424 23:53:28.213904 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 23:53:28.215014 kubelet[2518]: E0424 23:53:28.214265 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c6597b7cd-2bsfc"
Apr 24 23:53:28.215014 kubelet[2518]: E0424 23:53:28.214289 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c6597b7cd-2bsfc"
Apr 24 23:53:28.214579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8-shm.mount: Deactivated successfully.
Apr 24 23:53:28.215320 kubelet[2518]: E0424 23:53:28.214436 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6597b7cd-2bsfc_calico-system(98059560-e73c-45fa-bdb8-c35ea535eaa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6597b7cd-2bsfc_calico-system(98059560-e73c-45fa-bdb8-c35ea535eaa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c6597b7cd-2bsfc" podUID="98059560-e73c-45fa-bdb8-c35ea535eaa4" Apr 24 23:53:28.247668 systemd[1]: Created slice kubepods-besteffort-pod98a32295_87c4_4c33_bd3a_7a5df06b2711.slice - libcontainer container kubepods-besteffort-pod98a32295_87c4_4c33_bd3a_7a5df06b2711.slice. 
Apr 24 23:53:28.262004 containerd[1469]: time="2026-04-24T23:53:28.261644534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x29xn,Uid:98a32295-87c4-4c33-bd3a-7a5df06b2711,Namespace:calico-system,Attempt:0,}" Apr 24 23:53:28.268069 containerd[1469]: time="2026-04-24T23:53:28.267621461Z" level=info msg="StartContainer for \"6a0fb3ea15099c8ff854159ab5aa853bf73d355f745f293271f89f1d36554e02\" returns successfully" Apr 24 23:53:28.416747 containerd[1469]: time="2026-04-24T23:53:28.416239665Z" level=error msg="Failed to destroy network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:53:28.419040 containerd[1469]: time="2026-04-24T23:53:28.418371756Z" level=error msg="encountered an error cleaning up failed sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:53:28.420961 containerd[1469]: time="2026-04-24T23:53:28.420728491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x29xn,Uid:98a32295-87c4-4c33-bd3a-7a5df06b2711,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:53:28.424849 kubelet[2518]: E0424 23:53:28.423043 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:53:28.424849 kubelet[2518]: E0424 23:53:28.423525 2518 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x29xn" Apr 24 23:53:28.424849 kubelet[2518]: E0424 23:53:28.424043 2518 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x29xn" Apr 24 23:53:28.427561 kubelet[2518]: E0424 23:53:28.424405 2518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x29xn_calico-system(98a32295-87c4-4c33-bd3a-7a5df06b2711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x29xn_calico-system(98a32295-87c4-4c33-bd3a-7a5df06b2711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x29xn" 
podUID="98a32295-87c4-4c33-bd3a-7a5df06b2711" Apr 24 23:53:28.800052 kubelet[2518]: I0424 23:53:28.799971 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:28.804048 kubelet[2518]: I0424 23:53:28.804012 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:28.811384 containerd[1469]: time="2026-04-24T23:53:28.810786036Z" level=info msg="StopPodSandbox for \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\"" Apr 24 23:53:28.815942 containerd[1469]: time="2026-04-24T23:53:28.813766562Z" level=info msg="Ensure that sandbox 6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867 in task-service has been cleanup successfully" Apr 24 23:53:28.815942 containerd[1469]: time="2026-04-24T23:53:28.814453750Z" level=info msg="StopPodSandbox for \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\"" Apr 24 23:53:28.815942 containerd[1469]: time="2026-04-24T23:53:28.815226758Z" level=info msg="Ensure that sandbox 86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2 in task-service has been cleanup successfully" Apr 24 23:53:28.819125 kubelet[2518]: I0424 23:53:28.819103 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:28.823913 containerd[1469]: time="2026-04-24T23:53:28.823865929Z" level=info msg="StopPodSandbox for \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\"" Apr 24 23:53:28.824203 containerd[1469]: time="2026-04-24T23:53:28.824161769Z" level=info msg="Ensure that sandbox 56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8 in task-service has been cleanup successfully" Apr 24 23:53:28.824287 kubelet[2518]: I0424 23:53:28.824188 2518 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:28.826176 containerd[1469]: time="2026-04-24T23:53:28.825850168Z" level=info msg="StopPodSandbox for \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\"" Apr 24 23:53:28.826176 containerd[1469]: time="2026-04-24T23:53:28.826017764Z" level=info msg="Ensure that sandbox 8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0 in task-service has been cleanup successfully" Apr 24 23:53:28.846844 kubelet[2518]: I0424 23:53:28.846729 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:28.849672 containerd[1469]: time="2026-04-24T23:53:28.848722092Z" level=info msg="StopPodSandbox for \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\"" Apr 24 23:53:28.850119 containerd[1469]: time="2026-04-24T23:53:28.849333268Z" level=info msg="Ensure that sandbox 5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8 in task-service has been cleanup successfully" Apr 24 23:53:28.850538 kubelet[2518]: I0424 23:53:28.850409 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:28.851703 kubelet[2518]: I0424 23:53:28.851687 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:28.852738 containerd[1469]: time="2026-04-24T23:53:28.852707792Z" level=info msg="StopPodSandbox for \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\"" Apr 24 23:53:28.852953 containerd[1469]: time="2026-04-24T23:53:28.852868069Z" level=info msg="Ensure that sandbox 8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b in task-service 
has been cleanup successfully" Apr 24 23:53:28.853632 containerd[1469]: time="2026-04-24T23:53:28.853581900Z" level=info msg="StopPodSandbox for \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\"" Apr 24 23:53:28.853718 containerd[1469]: time="2026-04-24T23:53:28.853700942Z" level=info msg="Ensure that sandbox 5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a in task-service has been cleanup successfully" Apr 24 23:53:28.953231 kubelet[2518]: I0424 23:53:28.949988 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:29.029773 kubelet[2518]: I0424 23:53:29.029437 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-rjlrv" podStartSLOduration=2.067831642 podStartE2EDuration="23.029383365s" podCreationTimestamp="2026-04-24 23:53:06 +0000 UTC" firstStartedPulling="2026-04-24 23:53:06.830443286 +0000 UTC m=+17.894566594" lastFinishedPulling="2026-04-24 23:53:27.79199501 +0000 UTC m=+38.856118317" observedRunningTime="2026-04-24 23:53:29.025068508 +0000 UTC m=+40.089191816" watchObservedRunningTime="2026-04-24 23:53:29.029383365 +0000 UTC m=+40.093506671" Apr 24 23:53:29.041620 containerd[1469]: time="2026-04-24T23:53:29.040779601Z" level=info msg="StopPodSandbox for \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\"" Apr 24 23:53:29.067557 containerd[1469]: time="2026-04-24T23:53:29.066124328Z" level=info msg="Ensure that sandbox 1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd in task-service has been cleanup successfully" Apr 24 23:53:29.069802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0-shm.mount: Deactivated successfully. 
Apr 24 23:53:29.175745 systemd[1]: run-containerd-runc-k8s.io-6a0fb3ea15099c8ff854159ab5aa853bf73d355f745f293271f89f1d36554e02-runc.umKJ8I.mount: Deactivated successfully. Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.162 [INFO][3836] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.163 [INFO][3836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" iface="eth0" netns="/var/run/netns/cni-b8581bc4-14a4-31ae-d2c5-b71c3d8b8679" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.164 [INFO][3836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" iface="eth0" netns="/var/run/netns/cni-b8581bc4-14a4-31ae-d2c5-b71c3d8b8679" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.164 [INFO][3836] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" iface="eth0" netns="/var/run/netns/cni-b8581bc4-14a4-31ae-d2c5-b71c3d8b8679" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.164 [INFO][3836] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.164 [INFO][3836] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.427 [INFO][3958] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.434 [INFO][3958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.435 [INFO][3958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.542 [WARNING][3958] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.542 [INFO][3958] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.555 [INFO][3958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:29.577322 containerd[1469]: 2026-04-24 23:53:29.571 [INFO][3836] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.209 [INFO][3928] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.209 [INFO][3928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" iface="eth0" netns="/var/run/netns/cni-f2ff2770-1de1-b2f2-30aa-3e74beadf66b" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.209 [INFO][3928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" iface="eth0" netns="/var/run/netns/cni-f2ff2770-1de1-b2f2-30aa-3e74beadf66b" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.209 [INFO][3928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" iface="eth0" netns="/var/run/netns/cni-f2ff2770-1de1-b2f2-30aa-3e74beadf66b" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.209 [INFO][3928] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.209 [INFO][3928] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.448 [INFO][3992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.450 [INFO][3992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.556 [INFO][3992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.569 [WARNING][3992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.569 [INFO][3992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.571 [INFO][3992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:29.579712 containerd[1469]: 2026-04-24 23:53:29.574 [INFO][3928] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:29.583735 systemd[1]: run-netns-cni\x2df2ff2770\x2d1de1\x2db2f2\x2d30aa\x2d3e74beadf66b.mount: Deactivated successfully. Apr 24 23:53:29.584105 systemd[1]: run-netns-cni\x2db8581bc4\x2d14a4\x2d31ae\x2dd2c5\x2db71c3d8b8679.mount: Deactivated successfully. 
Apr 24 23:53:29.586251 containerd[1469]: time="2026-04-24T23:53:29.584980136Z" level=info msg="TearDown network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\" successfully" Apr 24 23:53:29.586251 containerd[1469]: time="2026-04-24T23:53:29.585018920Z" level=info msg="StopPodSandbox for \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\" returns successfully" Apr 24 23:53:29.591373 containerd[1469]: time="2026-04-24T23:53:29.591235025Z" level=info msg="TearDown network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\" successfully" Apr 24 23:53:29.591373 containerd[1469]: time="2026-04-24T23:53:29.591455377Z" level=info msg="StopPodSandbox for \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\" returns successfully" Apr 24 23:53:29.603865 kubelet[2518]: E0424 23:53:29.603692 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.198 [INFO][3835] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.199 [INFO][3835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" iface="eth0" netns="/var/run/netns/cni-e2a9bed6-41a0-54f2-9f6a-c7e3cc4cde3c" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.199 [INFO][3835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" iface="eth0" netns="/var/run/netns/cni-e2a9bed6-41a0-54f2-9f6a-c7e3cc4cde3c" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.200 [INFO][3835] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" iface="eth0" netns="/var/run/netns/cni-e2a9bed6-41a0-54f2-9f6a-c7e3cc4cde3c" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.200 [INFO][3835] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.200 [INFO][3835] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.557 [INFO][3978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.558 [INFO][3978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.572 [INFO][3978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.590 [WARNING][3978] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.590 [INFO][3978] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.599 [INFO][3978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:29.635089 containerd[1469]: 2026-04-24 23:53:29.611 [INFO][3835] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:29.638741 systemd[1]: Started sshd@7-10.0.0.107:22-10.0.0.1:48896.service - OpenSSH per-connection server daemon (10.0.0.1:48896). Apr 24 23:53:29.642135 systemd[1]: run-netns-cni\x2de2a9bed6\x2d41a0\x2d54f2\x2d9f6a\x2dc7e3cc4cde3c.mount: Deactivated successfully. 
Apr 24 23:53:29.646859 containerd[1469]: time="2026-04-24T23:53:29.646614641Z" level=info msg="TearDown network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\" successfully" Apr 24 23:53:29.646859 containerd[1469]: time="2026-04-24T23:53:29.646759900Z" level=info msg="StopPodSandbox for \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\" returns successfully" Apr 24 23:53:29.663047 containerd[1469]: time="2026-04-24T23:53:29.660587600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xjf5d,Uid:ccd9b5aa-d456-4c30-851f-fc449a59f911,Namespace:calico-system,Attempt:1,}" Apr 24 23:53:29.664383 containerd[1469]: time="2026-04-24T23:53:29.664342759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cq94j,Uid:2654d15d-7c11-4173-8201-2dab36e1b04b,Namespace:kube-system,Attempt:1,}" Apr 24 23:53:29.671329 containerd[1469]: time="2026-04-24T23:53:29.671105848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c5b888c4-bkhcl,Uid:db8622af-35ea-4a80-8494-1d6f8141a50e,Namespace:calico-system,Attempt:1,}" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.267 [INFO][3870] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.268 [INFO][3870] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" iface="eth0" netns="/var/run/netns/cni-b2888fe5-aa68-d74b-bf5b-d989b9cef4d3" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.268 [INFO][3870] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" iface="eth0" netns="/var/run/netns/cni-b2888fe5-aa68-d74b-bf5b-d989b9cef4d3" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.271 [INFO][3870] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" iface="eth0" netns="/var/run/netns/cni-b2888fe5-aa68-d74b-bf5b-d989b9cef4d3" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.271 [INFO][3870] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.271 [INFO][3870] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.567 [INFO][4010] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.567 [INFO][4010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.601 [INFO][4010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.667 [WARNING][4010] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.683 [INFO][4010] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.746 [INFO][4010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:29.781625 containerd[1469]: 2026-04-24 23:53:29.769 [INFO][3870] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:29.784525 containerd[1469]: time="2026-04-24T23:53:29.783214802Z" level=info msg="TearDown network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\" successfully" Apr 24 23:53:29.789314 containerd[1469]: time="2026-04-24T23:53:29.789091764Z" level=info msg="StopPodSandbox for \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\" returns successfully" Apr 24 23:53:29.794922 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 48896 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:29.805923 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:29.810666 containerd[1469]: time="2026-04-24T23:53:29.806830302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x29xn,Uid:98a32295-87c4-4c33-bd3a-7a5df06b2711,Namespace:calico-system,Attempt:1,}" Apr 24 23:53:29.826971 systemd-logind[1458]: New session 8 of user core. 
Apr 24 23:53:29.834379 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.198 [INFO][3889] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.198 [INFO][3889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" iface="eth0" netns="/var/run/netns/cni-723dca88-d288-5724-ef53-31ec0c03a6e4" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.198 [INFO][3889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" iface="eth0" netns="/var/run/netns/cni-723dca88-d288-5724-ef53-31ec0c03a6e4" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.199 [INFO][3889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" iface="eth0" netns="/var/run/netns/cni-723dca88-d288-5724-ef53-31ec0c03a6e4" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.199 [INFO][3889] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.199 [INFO][3889] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.571 [INFO][3985] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.572 [INFO][3985] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.747 [INFO][3985] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.935 [WARNING][3985] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.937 [INFO][3985] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.947 [INFO][3985] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:29.973488 containerd[1469]: 2026-04-24 23:53:29.960 [INFO][3889] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:29.975245 containerd[1469]: time="2026-04-24T23:53:29.975137253Z" level=info msg="TearDown network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\" successfully" Apr 24 23:53:29.975400 containerd[1469]: time="2026-04-24T23:53:29.975368059Z" level=info msg="StopPodSandbox for \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\" returns successfully" Apr 24 23:53:29.990142 containerd[1469]: time="2026-04-24T23:53:29.989115469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-2bsfc,Uid:98059560-e73c-45fa-bdb8-c35ea535eaa4,Namespace:calico-system,Attempt:1,}" Apr 24 23:53:30.084070 systemd[1]: run-netns-cni\x2db2888fe5\x2daa68\x2dd74b\x2dbf5b\x2dd989b9cef4d3.mount: Deactivated successfully. Apr 24 23:53:30.084323 systemd[1]: run-netns-cni\x2d723dca88\x2dd288\x2d5724\x2def53\x2d31ec0c03a6e4.mount: Deactivated successfully. 
Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.231 [INFO][3834] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.260 [INFO][3834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" iface="eth0" netns="/var/run/netns/cni-b3c869b7-a709-7a08-9b91-c8e3a96fd5d7" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.261 [INFO][3834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" iface="eth0" netns="/var/run/netns/cni-b3c869b7-a709-7a08-9b91-c8e3a96fd5d7" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.270 [INFO][3834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" iface="eth0" netns="/var/run/netns/cni-b3c869b7-a709-7a08-9b91-c8e3a96fd5d7" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.270 [INFO][3834] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.270 [INFO][3834] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.703 [INFO][4006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.704 [INFO][4006] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.949 [INFO][4006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.993 [WARNING][4006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:29.994 [INFO][4006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:30.030 [INFO][4006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:30.113248 containerd[1469]: 2026-04-24 23:53:30.076 [INFO][3834] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:30.120783 containerd[1469]: time="2026-04-24T23:53:30.118650316Z" level=info msg="TearDown network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\" successfully" Apr 24 23:53:30.120783 containerd[1469]: time="2026-04-24T23:53:30.120753426Z" level=info msg="StopPodSandbox for \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\" returns successfully" Apr 24 23:53:30.120263 systemd[1]: run-netns-cni\x2db3c869b7\x2da709\x2d7a08\x2d9b91\x2dc8e3a96fd5d7.mount: Deactivated successfully. 
Apr 24 23:53:30.146400 containerd[1469]: time="2026-04-24T23:53:30.146290594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-t7h5t,Uid:4ed747e3-6b68-48ad-8996-db64a1a66b08,Namespace:calico-system,Attempt:1,}" Apr 24 23:53:30.279123 sshd[4045]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:30.285292 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:48896.service: Deactivated successfully. Apr 24 23:53:30.291170 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:53:30.293335 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:53:30.295365 systemd-logind[1458]: Removed session 8. Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.388 [INFO][3895] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.394 [INFO][3895] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" iface="eth0" netns="/var/run/netns/cni-4745eb16-ae5d-5cb4-07fe-b3eacfd20c64" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.394 [INFO][3895] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" iface="eth0" netns="/var/run/netns/cni-4745eb16-ae5d-5cb4-07fe-b3eacfd20c64" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.395 [INFO][3895] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" iface="eth0" netns="/var/run/netns/cni-4745eb16-ae5d-5cb4-07fe-b3eacfd20c64" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.395 [INFO][3895] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.395 [INFO][3895] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.706 [INFO][4029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:29.706 [INFO][4029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:30.033 [INFO][4029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:30.295 [WARNING][4029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:30.295 [INFO][4029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:30.312 [INFO][4029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:30.364186 containerd[1469]: 2026-04-24 23:53:30.341 [INFO][3895] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:30.369473 containerd[1469]: time="2026-04-24T23:53:30.368770800Z" level=info msg="TearDown network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\" successfully" Apr 24 23:53:30.369473 containerd[1469]: time="2026-04-24T23:53:30.368881432Z" level=info msg="StopPodSandbox for \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\" returns successfully" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.280 [INFO][3917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.280 [INFO][3917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" iface="eth0" netns="/var/run/netns/cni-00a99707-471e-006b-338a-383fce8e1e65" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.289 [INFO][3917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" iface="eth0" netns="/var/run/netns/cni-00a99707-471e-006b-338a-383fce8e1e65" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.303 [INFO][3917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" iface="eth0" netns="/var/run/netns/cni-00a99707-471e-006b-338a-383fce8e1e65" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.304 [INFO][3917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.304 [INFO][3917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.707 [INFO][4020] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:29.709 [INFO][4020] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:30.314 [INFO][4020] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:30.343 [WARNING][4020] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:30.344 [INFO][4020] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:30.355 [INFO][4020] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:30.376506 containerd[1469]: 2026-04-24 23:53:30.367 [INFO][3917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:30.377122 containerd[1469]: time="2026-04-24T23:53:30.377097267Z" level=info msg="TearDown network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\" successfully" Apr 24 23:53:30.377208 containerd[1469]: time="2026-04-24T23:53:30.377198234Z" level=info msg="StopPodSandbox for \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\" returns successfully" Apr 24 23:53:30.379137 kubelet[2518]: E0424 23:53:30.379113 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:30.385303 containerd[1469]: time="2026-04-24T23:53:30.385210143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zlthf,Uid:707b52a6-210e-4b12-ba60-391f0fd35951,Namespace:kube-system,Attempt:1,}" Apr 24 23:53:30.454106 kubelet[2518]: I0424 23:53:30.450263 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair\") pod \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " Apr 24 23:53:30.454106 kubelet[2518]: I0424 23:53:30.450456 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-nginx-config\" (UniqueName: \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-nginx-config\") pod \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " Apr 24 23:53:30.454106 kubelet[2518]: I0424 23:53:30.450538 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-kube-api-access-c7wtw\" (UniqueName: \"kubernetes.io/projected/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-kube-api-access-c7wtw\") pod \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " Apr 24 23:53:30.454106 kubelet[2518]: I0424 23:53:30.450595 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-ca-bundle\") pod \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " Apr 24 23:53:30.454106 kubelet[2518]: I0424 23:53:30.451698 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-ca-bundle" pod "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5" (UID: "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:53:30.542405 kubelet[2518]: I0424 23:53:30.533769 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-nginx-config" pod "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5" (UID: "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:53:30.553721 kubelet[2518]: I0424 23:53:30.553373 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair" pod "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5" (UID: "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:53:30.557864 kubelet[2518]: I0424 23:53:30.557467 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair\") pod \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\" (UID: \"2ff72427-bc1b-4a96-86a8-7d0aff3ddae5\") " Apr 24 23:53:30.557864 kubelet[2518]: I0424 23:53:30.557674 2518 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 24 23:53:30.557864 kubelet[2518]: I0424 23:53:30.557693 2518 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 24 23:53:30.557864 kubelet[2518]: W0424 23:53:30.557399 2518 empty_dir.go:505] Warning: Unmount skipped because path does not exist: 
/var/lib/kubelet/pods/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5/volumes/kubernetes.io~secret/whisker-backend-key-pair Apr 24 23:53:30.557864 kubelet[2518]: I0424 23:53:30.557782 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair" pod "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5" (UID: "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:53:30.583252 kubelet[2518]: I0424 23:53:30.583043 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-kube-api-access-c7wtw" pod "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5" (UID: "2ff72427-bc1b-4a96-86a8-7d0aff3ddae5"). InnerVolumeSpecName "kube-api-access-c7wtw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:53:30.660406 kubelet[2518]: I0424 23:53:30.659124 2518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c7wtw\" (UniqueName: \"kubernetes.io/projected/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-kube-api-access-c7wtw\") on node \"localhost\" DevicePath \"\"" Apr 24 23:53:30.660406 kubelet[2518]: I0424 23:53:30.659162 2518 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 24 23:53:30.772309 systemd-networkd[1402]: calia8b739a6006: Link UP Apr 24 23:53:30.772494 systemd-networkd[1402]: calia8b739a6006: Gained carrier Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:29.997 [ERROR][4052] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.215 
[INFO][4052] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0 goldmane-9f7667bb8- calico-system ccd9b5aa-d456-4c30-851f-fc449a59f911 922 0 2026-04-24 23:53:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-xjf5d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia8b739a6006 [] [] }} ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.216 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.529 [INFO][4167] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" HandleID="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.565 [INFO][4167] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" HandleID="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff10), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-xjf5d", "timestamp":"2026-04-24 23:53:30.529451522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00075e000)} Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.567 [INFO][4167] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.567 [INFO][4167] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.568 [INFO][4167] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.574 [INFO][4167] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.647 [INFO][4167] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.681 [INFO][4167] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.690 [INFO][4167] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.699 [INFO][4167] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.699 [INFO][4167] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" 
host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.701 [INFO][4167] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.710 [INFO][4167] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.728 [INFO][4167] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.728 [INFO][4167] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" host="localhost" Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.729 [INFO][4167] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:53:30.840711 containerd[1469]: 2026-04-24 23:53:30.729 [INFO][4167] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" HandleID="k8s-pod-network.fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.845129 containerd[1469]: 2026-04-24 23:53:30.749 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ccd9b5aa-d456-4c30-851f-fc449a59f911", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-xjf5d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8b739a6006", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:30.845129 containerd[1469]: 2026-04-24 23:53:30.749 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.845129 containerd[1469]: 2026-04-24 23:53:30.749 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8b739a6006 ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.845129 containerd[1469]: 2026-04-24 23:53:30.771 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.845129 containerd[1469]: 2026-04-24 23:53:30.772 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ccd9b5aa-d456-4c30-851f-fc449a59f911", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf", Pod:"goldmane-9f7667bb8-xjf5d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8b739a6006", MAC:"4a:b3:e8:6a:58:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:30.845129 containerd[1469]: 2026-04-24 23:53:30.816 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf" Namespace="calico-system" Pod="goldmane-9f7667bb8-xjf5d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:30.883620 systemd-networkd[1402]: calib5821818801: Link UP Apr 24 23:53:30.886266 systemd-networkd[1402]: calib5821818801: Gained carrier Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.027 [ERROR][4050] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.132 [INFO][4050] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--7d764666f9--cq94j-eth0 coredns-7d764666f9- kube-system 2654d15d-7c11-4173-8201-2dab36e1b04b 919 0 2026-04-24 23:52:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-cq94j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib5821818801 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.133 [INFO][4050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.575 [INFO][4164] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" HandleID="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.653 [INFO][4164] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" HandleID="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000548450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-7d764666f9-cq94j", "timestamp":"2026-04-24 23:53:30.575140467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000422420)} Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.653 [INFO][4164] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.729 [INFO][4164] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.729 [INFO][4164] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.737 [INFO][4164] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.763 [INFO][4164] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.807 [INFO][4164] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.815 [INFO][4164] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.825 [INFO][4164] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.825 [INFO][4164] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 
23:53:30.831 [INFO][4164] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0 Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.842 [INFO][4164] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.865 [INFO][4164] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.865 [INFO][4164] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" host="localhost" Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.865 [INFO][4164] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:53:30.917638 containerd[1469]: 2026-04-24 23:53:30.866 [INFO][4164] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" HandleID="k8s-pod-network.e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.919495 containerd[1469]: 2026-04-24 23:53:30.871 [INFO][4050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cq94j-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2654d15d-7c11-4173-8201-2dab36e1b04b", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-cq94j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5821818801", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:30.919495 containerd[1469]: 2026-04-24 23:53:30.872 [INFO][4050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.919495 containerd[1469]: 2026-04-24 23:53:30.872 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5821818801 ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.919495 containerd[1469]: 2026-04-24 23:53:30.887 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.919495 containerd[1469]: 2026-04-24 23:53:30.888 [INFO][4050] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cq94j-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2654d15d-7c11-4173-8201-2dab36e1b04b", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0", Pod:"coredns-7d764666f9-cq94j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5821818801", MAC:"ca:b9:65:cf:55:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:30.919495 containerd[1469]: 2026-04-24 23:53:30.908 [INFO][4050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0" Namespace="kube-system" Pod="coredns-7d764666f9-cq94j" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:30.962749 systemd-networkd[1402]: cali2882a89644c: Link UP Apr 24 23:53:30.962916 systemd-networkd[1402]: cali2882a89644c: Gained carrier Apr 24 23:53:30.999576 containerd[1469]: time="2026-04-24T23:53:30.995920036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:30.999576 containerd[1469]: time="2026-04-24T23:53:30.996027627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:30.999576 containerd[1469]: time="2026-04-24T23:53:30.996038710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:30.999576 containerd[1469]: time="2026-04-24T23:53:30.996175295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:31.013476 containerd[1469]: time="2026-04-24T23:53:31.012429117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:31.013476 containerd[1469]: time="2026-04-24T23:53:31.012664997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:31.003207 systemd[1]: Removed slice kubepods-besteffort-pod2ff72427_bc1b_4a96_86a8_7d0aff3ddae5.slice - libcontainer container kubepods-besteffort-pod2ff72427_bc1b_4a96_86a8_7d0aff3ddae5.slice. Apr 24 23:53:31.022229 containerd[1469]: time="2026-04-24T23:53:31.021699692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:31.033214 containerd[1469]: time="2026-04-24T23:53:31.032867529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:29.987 [ERROR][4054] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.205 [INFO][4054] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0 calico-kube-controllers-64c5b888c4- calico-system db8622af-35ea-4a80-8494-1d6f8141a50e 920 0 2026-04-24 23:53:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64c5b888c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64c5b888c4-bkhcl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2882a89644c [] [] }} 
ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.206 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.707 [INFO][4195] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" HandleID="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.735 [INFO][4195] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" HandleID="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001180b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64c5b888c4-bkhcl", "timestamp":"2026-04-24 23:53:30.707626129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000599b80)} Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.736 [INFO][4195] ipam/ipam_plugin.go 438: About to 
acquire host-wide IPAM lock. Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.867 [INFO][4195] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.867 [INFO][4195] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.882 [INFO][4195] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.894 [INFO][4195] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.899 [INFO][4195] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.911 [INFO][4195] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.919 [INFO][4195] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.919 [INFO][4195] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.925 [INFO][4195] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172 Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.932 [INFO][4195] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 
23:53:30.945 [INFO][4195] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.945 [INFO][4195] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" host="localhost" Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.946 [INFO][4195] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:31.041871 containerd[1469]: 2026-04-24 23:53:30.946 [INFO][4195] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" HandleID="k8s-pod-network.9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.043231 containerd[1469]: 2026-04-24 23:53:30.956 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0", GenerateName:"calico-kube-controllers-64c5b888c4-", Namespace:"calico-system", SelfLink:"", UID:"db8622af-35ea-4a80-8494-1d6f8141a50e", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"64c5b888c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64c5b888c4-bkhcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2882a89644c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:31.043231 containerd[1469]: 2026-04-24 23:53:30.956 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.043231 containerd[1469]: 2026-04-24 23:53:30.956 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2882a89644c ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.043231 containerd[1469]: 2026-04-24 23:53:30.962 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.043231 containerd[1469]: 2026-04-24 23:53:30.963 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0", GenerateName:"calico-kube-controllers-64c5b888c4-", Namespace:"calico-system", SelfLink:"", UID:"db8622af-35ea-4a80-8494-1d6f8141a50e", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c5b888c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172", Pod:"calico-kube-controllers-64c5b888c4-bkhcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2882a89644c", MAC:"16:5f:a1:55:70:94", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:31.043231 containerd[1469]: 2026-04-24 23:53:31.020 [INFO][4054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172" Namespace="calico-system" Pod="calico-kube-controllers-64c5b888c4-bkhcl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:31.072168 systemd[1]: Started cri-containerd-e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0.scope - libcontainer container e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0. Apr 24 23:53:31.081259 systemd[1]: run-netns-cni\x2d00a99707\x2d471e\x2d006b\x2d338a\x2d383fce8e1e65.mount: Deactivated successfully. Apr 24 23:53:31.081456 systemd[1]: run-netns-cni\x2d4745eb16\x2dae5d\x2d5cb4\x2d07fe\x2db3eacfd20c64.mount: Deactivated successfully. Apr 24 23:53:31.081528 systemd[1]: var-lib-kubelet-pods-2ff72427\x2dbc1b\x2d4a96\x2d86a8\x2d7d0aff3ddae5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc7wtw.mount: Deactivated successfully. Apr 24 23:53:31.081631 systemd[1]: var-lib-kubelet-pods-2ff72427\x2dbc1b\x2d4a96\x2d86a8\x2d7d0aff3ddae5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:53:31.137153 systemd[1]: Started cri-containerd-fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf.scope - libcontainer container fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf. Apr 24 23:53:31.200812 containerd[1469]: time="2026-04-24T23:53:31.200641457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:31.200812 containerd[1469]: time="2026-04-24T23:53:31.200700601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:31.200812 containerd[1469]: time="2026-04-24T23:53:31.200722433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:31.204887 containerd[1469]: time="2026-04-24T23:53:31.200843935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:31.277192 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:31.280515 kubelet[2518]: I0424 23:53:31.279109 2518 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2ff72427-bc1b-4a96-86a8-7d0aff3ddae5" path="/var/lib/kubelet/pods/2ff72427-bc1b-4a96-86a8-7d0aff3ddae5/volumes" Apr 24 23:53:31.338987 systemd[1]: Created slice kubepods-besteffort-pod855fc80f_32f0_41b1_863d_3c08005010f1.slice - libcontainer container kubepods-besteffort-pod855fc80f_32f0_41b1_863d_3c08005010f1.slice. Apr 24 23:53:31.437270 systemd[1]: Started cri-containerd-9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172.scope - libcontainer container 9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172. 
Apr 24 23:53:31.441889 kubelet[2518]: I0424 23:53:31.441793 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/855fc80f-32f0-41b1-863d-3c08005010f1-whisker-backend-key-pair\") pod \"whisker-7655d7cd55-pzfck\" (UID: \"855fc80f-32f0-41b1-863d-3c08005010f1\") " pod="calico-system/whisker-7655d7cd55-pzfck" Apr 24 23:53:31.442102 kubelet[2518]: I0424 23:53:31.441919 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/855fc80f-32f0-41b1-863d-3c08005010f1-nginx-config\") pod \"whisker-7655d7cd55-pzfck\" (UID: \"855fc80f-32f0-41b1-863d-3c08005010f1\") " pod="calico-system/whisker-7655d7cd55-pzfck" Apr 24 23:53:31.442102 kubelet[2518]: I0424 23:53:31.441972 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7hj\" (UniqueName: \"kubernetes.io/projected/855fc80f-32f0-41b1-863d-3c08005010f1-kube-api-access-ls7hj\") pod \"whisker-7655d7cd55-pzfck\" (UID: \"855fc80f-32f0-41b1-863d-3c08005010f1\") " pod="calico-system/whisker-7655d7cd55-pzfck" Apr 24 23:53:31.442102 kubelet[2518]: I0424 23:53:31.441996 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855fc80f-32f0-41b1-863d-3c08005010f1-whisker-ca-bundle\") pod \"whisker-7655d7cd55-pzfck\" (UID: \"855fc80f-32f0-41b1-863d-3c08005010f1\") " pod="calico-system/whisker-7655d7cd55-pzfck" Apr 24 23:53:31.508302 containerd[1469]: time="2026-04-24T23:53:31.506728288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cq94j,Uid:2654d15d-7c11-4173-8201-2dab36e1b04b,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0\"" Apr 24 23:53:31.528774 
systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:31.540369 kubelet[2518]: E0424 23:53:31.533507 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:31.634055 containerd[1469]: time="2026-04-24T23:53:31.633952712Z" level=info msg="CreateContainer within sandbox \"e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:53:31.758451 systemd-networkd[1402]: calic4fc1cf0e70: Link UP Apr 24 23:53:31.762967 containerd[1469]: time="2026-04-24T23:53:31.757848780Z" level=info msg="CreateContainer within sandbox \"e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2aa2dddb3223499f8ef5c811780da8d8b19b9480f0bf47a54e0b571be19e77ba\"" Apr 24 23:53:31.768646 containerd[1469]: time="2026-04-24T23:53:31.768582433Z" level=info msg="StartContainer for \"2aa2dddb3223499f8ef5c811780da8d8b19b9480f0bf47a54e0b571be19e77ba\"" Apr 24 23:53:31.774957 containerd[1469]: time="2026-04-24T23:53:31.773126580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xjf5d,Uid:ccd9b5aa-d456-4c30-851f-fc449a59f911,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf\"" Apr 24 23:53:31.780375 containerd[1469]: time="2026-04-24T23:53:31.780356364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:53:31.785441 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:31.811673 systemd-networkd[1402]: calic4fc1cf0e70: Gained carrier Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.329 [ERROR][4090] 
cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.357 [INFO][4090] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x29xn-eth0 csi-node-driver- calico-system 98a32295-87c4-4c33-bd3a-7a5df06b2711 925 0 2026-04-24 23:53:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x29xn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4fc1cf0e70 [] [] }} ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-" Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.358 [INFO][4090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.813 [INFO][4278] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" HandleID="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.833 [INFO][4278] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" HandleID="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000498330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x29xn", "timestamp":"2026-04-24 23:53:30.813681426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000337600)} Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.833 [INFO][4278] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.946 [INFO][4278] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:30.946 [INFO][4278] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.032 [INFO][4278] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.150 [INFO][4278] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.268 [INFO][4278] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.319 [INFO][4278] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.449 [INFO][4278] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.449 [INFO][4278] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.468 [INFO][4278] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.506 [INFO][4278] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.643 [INFO][4278] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.647 [INFO][4278] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" host="localhost"
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.651 [INFO][4278] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:53:31.836536 containerd[1469]: 2026-04-24 23:53:31.652 [INFO][4278] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" HandleID="k8s-pod-network.49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Workload="localhost-k8s-csi--node--driver--x29xn-eth0"
Apr 24 23:53:31.837726 containerd[1469]: 2026-04-24 23:53:31.713 [INFO][4090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x29xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98a32295-87c4-4c33-bd3a-7a5df06b2711", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x29xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4fc1cf0e70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:53:31.837726 containerd[1469]: 2026-04-24 23:53:31.713 [INFO][4090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0"
Apr 24 23:53:31.837726 containerd[1469]: 2026-04-24 23:53:31.721 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4fc1cf0e70 ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0"
Apr 24 23:53:31.837726 containerd[1469]: 2026-04-24 23:53:31.770 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0"
Apr 24 23:53:31.837726 containerd[1469]: 2026-04-24 23:53:31.773 [INFO][4090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x29xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98a32295-87c4-4c33-bd3a-7a5df06b2711", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99", Pod:"csi-node-driver-x29xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4fc1cf0e70", MAC:"3e:fe:df:d9:73:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:53:31.837726 containerd[1469]: 2026-04-24 23:53:31.814 [INFO][4090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99" Namespace="calico-system" Pod="csi-node-driver-x29xn" WorkloadEndpoint="localhost-k8s-csi--node--driver--x29xn-eth0"
Apr 24 23:53:31.867616 systemd[1]: Started cri-containerd-2aa2dddb3223499f8ef5c811780da8d8b19b9480f0bf47a54e0b571be19e77ba.scope - libcontainer container 2aa2dddb3223499f8ef5c811780da8d8b19b9480f0bf47a54e0b571be19e77ba.
Apr 24 23:53:31.884249 systemd-networkd[1402]: cali7d909ce8779: Link UP
Apr 24 23:53:31.902805 systemd-networkd[1402]: cali7d909ce8779: Gained carrier
Apr 24 23:53:31.921690 containerd[1469]: time="2026-04-24T23:53:31.921257175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:53:31.921690 containerd[1469]: time="2026-04-24T23:53:31.921376045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:53:31.921690 containerd[1469]: time="2026-04-24T23:53:31.921389895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:53:31.922852 containerd[1469]: time="2026-04-24T23:53:31.921858196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:30.379 [ERROR][4124] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:30.640 [INFO][4124] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0 calico-apiserver-5c6597b7cd- calico-system 98059560-e73c-45fa-bdb8-c35ea535eaa4 921 0 2026-04-24 23:53:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c6597b7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c6597b7cd-2bsfc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7d909ce8779 [] [] }} ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:30.640 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:30.917 [INFO][4324] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" HandleID="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:30.929 [INFO][4324] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" HandleID="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f21e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5c6597b7cd-2bsfc", "timestamp":"2026-04-24 23:53:30.917719334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e4000)}
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:30.929 [INFO][4324] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.647 [INFO][4324] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.647 [INFO][4324] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.679 [INFO][4324] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.693 [INFO][4324] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.718 [INFO][4324] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.740 [INFO][4324] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.760 [INFO][4324] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.760 [INFO][4324] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.795 [INFO][4324] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.835 [INFO][4324] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.857 [INFO][4324] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.857 [INFO][4324] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" host="localhost"
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.867 [INFO][4324] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:53:31.943301 containerd[1469]: 2026-04-24 23:53:31.867 [INFO][4324] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" HandleID="k8s-pod-network.61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.947627 containerd[1469]: 2026-04-24 23:53:31.879 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"98059560-e73c-45fa-bdb8-c35ea535eaa4", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c6597b7cd-2bsfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7d909ce8779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:53:31.947627 containerd[1469]: 2026-04-24 23:53:31.879 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.947627 containerd[1469]: 2026-04-24 23:53:31.879 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d909ce8779 ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.947627 containerd[1469]: 2026-04-24 23:53:31.908 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.947627 containerd[1469]: 2026-04-24 23:53:31.909 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"98059560-e73c-45fa-bdb8-c35ea535eaa4", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b", Pod:"calico-apiserver-5c6597b7cd-2bsfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7d909ce8779", MAC:"7e:49:92:5a:6f:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:53:31.947627 containerd[1469]: 2026-04-24 23:53:31.932 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-2bsfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0"
Apr 24 23:53:31.951577 containerd[1469]: time="2026-04-24T23:53:31.951104164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c5b888c4-bkhcl,Uid:db8622af-35ea-4a80-8494-1d6f8141a50e,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172\""
Apr 24 23:53:31.962785 containerd[1469]: time="2026-04-24T23:53:31.962509282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7655d7cd55-pzfck,Uid:855fc80f-32f0-41b1-863d-3c08005010f1,Namespace:calico-system,Attempt:0,}"
Apr 24 23:53:32.005322 containerd[1469]: time="2026-04-24T23:53:32.003875574Z" level=info msg="StartContainer for \"2aa2dddb3223499f8ef5c811780da8d8b19b9480f0bf47a54e0b571be19e77ba\" returns successfully"
Apr 24 23:53:32.022156 systemd[1]: Started cri-containerd-49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99.scope - libcontainer container 49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99.
Apr 24 23:53:32.060634 systemd-networkd[1402]: cali2882a89644c: Gained IPv6LL
Apr 24 23:53:32.125817 systemd-networkd[1402]: calia8b739a6006: Gained IPv6LL
Apr 24 23:53:32.219777 kernel: calico-node[4158]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 24 23:53:32.315873 systemd-networkd[1402]: cali4891e05be4d: Link UP
Apr 24 23:53:32.319277 systemd-networkd[1402]: cali4891e05be4d: Gained carrier
Apr 24 23:53:32.364703 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 24 23:53:32.573207 containerd[1469]: time="2026-04-24T23:53:32.561262363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:53:32.573207 containerd[1469]: time="2026-04-24T23:53:32.561387350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:53:32.573207 containerd[1469]: time="2026-04-24T23:53:32.561401346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:53:32.573207 containerd[1469]: time="2026-04-24T23:53:32.561638999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:30.657 [ERROR][4303] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:30.699 [INFO][4303] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--zlthf-eth0 coredns-7d764666f9- kube-system 707b52a6-210e-4b12-ba60-391f0fd35951 928 0 2026-04-24 23:52:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-zlthf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4891e05be4d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:30.706 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:30.920 [INFO][4336] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" HandleID="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:30.929 [INFO][4336] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" HandleID="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-zlthf", "timestamp":"2026-04-24 23:53:30.920834983 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000449600)}
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:30.929 [INFO][4336] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.859 [INFO][4336] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.859 [INFO][4336] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.871 [INFO][4336] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.908 [INFO][4336] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.951 [INFO][4336] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.960 [INFO][4336] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.964 [INFO][4336] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.964 [INFO][4336] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.971 [INFO][4336] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:31.992 [INFO][4336] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:32.022 [INFO][4336] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:32.031 [INFO][4336] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" host="localhost"
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:32.031 [INFO][4336] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:53:32.688872 containerd[1469]: 2026-04-24 23:53:32.031 [INFO][4336] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" HandleID="k8s-pod-network.3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.698764 containerd[1469]: 2026-04-24 23:53:32.180 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--zlthf-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"707b52a6-210e-4b12-ba60-391f0fd35951", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-zlthf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4891e05be4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:53:32.698764 containerd[1469]: 2026-04-24 23:53:32.186 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.698764 containerd[1469]: 2026-04-24 23:53:32.187 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4891e05be4d ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.698764 containerd[1469]: 2026-04-24 23:53:32.364 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.698764 containerd[1469]: 2026-04-24 23:53:32.414 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--zlthf-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"707b52a6-210e-4b12-ba60-391f0fd35951", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d", Pod:"coredns-7d764666f9-zlthf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4891e05be4d", MAC:"be:e6:b5:4f:4d:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:53:32.698764 containerd[1469]: 2026-04-24 23:53:32.597 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d" Namespace="kube-system" Pod="coredns-7d764666f9-zlthf" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--zlthf-eth0"
Apr 24 23:53:32.701608 systemd[1]: Started cri-containerd-61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b.scope - libcontainer container 61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b.
Apr 24 23:53:32.706638 containerd[1469]: time="2026-04-24T23:53:32.693845006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x29xn,Uid:98a32295-87c4-4c33-bd3a-7a5df06b2711,Namespace:calico-system,Attempt:1,} returns sandbox id \"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99\"" Apr 24 23:53:32.801435 systemd-networkd[1402]: cali46bd8470252: Link UP Apr 24 23:53:32.824029 systemd-networkd[1402]: calib5821818801: Gained IPv6LL Apr 24 23:53:32.873247 systemd-networkd[1402]: cali46bd8470252: Gained carrier Apr 24 23:53:32.879451 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:32.883391 containerd[1469]: time="2026-04-24T23:53:32.874136458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:32.883391 containerd[1469]: time="2026-04-24T23:53:32.874207704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:32.883391 containerd[1469]: time="2026-04-24T23:53:32.874219830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:32.883391 containerd[1469]: time="2026-04-24T23:53:32.875068290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:33.040072 systemd[1]: Started cri-containerd-3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d.scope - libcontainer container 3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d. 
Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:30.681 [ERROR][4204] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:30.757 [INFO][4204] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0 calico-apiserver-5c6597b7cd- calico-system 4ed747e3-6b68-48ad-8996-db64a1a66b08 923 0 2026-04-24 23:53:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c6597b7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c6597b7cd-t7h5t eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali46bd8470252 [] [] }} ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:30.757 [INFO][4204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:30.988 [INFO][4356] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" HandleID="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 
23:53:33.060352 containerd[1469]: 2026-04-24 23:53:31.047 [INFO][4356] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" HandleID="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5c6597b7cd-t7h5t", "timestamp":"2026-04-24 23:53:30.988141781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe840)} Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:31.047 [INFO][4356] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.036 [INFO][4356] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.047 [INFO][4356] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.386 [INFO][4356] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.601 [INFO][4356] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.645 [INFO][4356] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.654 [INFO][4356] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.658 [INFO][4356] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.658 [INFO][4356] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.673 [INFO][4356] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.738 [INFO][4356] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.764 [INFO][4356] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.765 [INFO][4356] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" host="localhost" Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.765 [INFO][4356] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:33.060352 containerd[1469]: 2026-04-24 23:53:32.765 [INFO][4356] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" HandleID="k8s-pod-network.6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:33.069046 containerd[1469]: 2026-04-24 23:53:32.770 [INFO][4204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"4ed747e3-6b68-48ad-8996-db64a1a66b08", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c6597b7cd-t7h5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali46bd8470252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:33.069046 containerd[1469]: 2026-04-24 23:53:32.771 [INFO][4204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:33.069046 containerd[1469]: 2026-04-24 23:53:32.773 [INFO][4204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46bd8470252 ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:33.069046 containerd[1469]: 2026-04-24 23:53:32.878 [INFO][4204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:33.069046 containerd[1469]: 2026-04-24 23:53:32.881 [INFO][4204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"4ed747e3-6b68-48ad-8996-db64a1a66b08", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff", Pod:"calico-apiserver-5c6597b7cd-t7h5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali46bd8470252", MAC:"ce:82:90:45:f5:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:33.069046 containerd[1469]: 2026-04-24 23:53:33.042 [INFO][4204] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff" Namespace="calico-system" Pod="calico-apiserver-5c6597b7cd-t7h5t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:33.069046 containerd[1469]: time="2026-04-24T23:53:33.068980345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-2bsfc,Uid:98059560-e73c-45fa-bdb8-c35ea535eaa4,Namespace:calico-system,Attempt:1,} returns sandbox id \"61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b\"" Apr 24 23:53:33.092577 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:33.136501 kubelet[2518]: E0424 23:53:33.135294 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:33.148181 containerd[1469]: time="2026-04-24T23:53:33.145736734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:33.148181 containerd[1469]: time="2026-04-24T23:53:33.145840385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:33.148181 containerd[1469]: time="2026-04-24T23:53:33.145852715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:33.148181 containerd[1469]: time="2026-04-24T23:53:33.146009129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:33.234149 containerd[1469]: time="2026-04-24T23:53:33.234074207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zlthf,Uid:707b52a6-210e-4b12-ba60-391f0fd35951,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d\"" Apr 24 23:53:33.250771 kubelet[2518]: E0424 23:53:33.250683 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:33.257715 kubelet[2518]: I0424 23:53:33.256851 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cq94j" podStartSLOduration=39.256833333 podStartE2EDuration="39.256833333s" podCreationTimestamp="2026-04-24 23:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:53:33.248719485 +0000 UTC m=+44.312842792" watchObservedRunningTime="2026-04-24 23:53:33.256833333 +0000 UTC m=+44.320956655" Apr 24 23:53:33.266973 containerd[1469]: time="2026-04-24T23:53:33.266885467Z" level=info msg="CreateContainer within sandbox \"3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:53:33.295864 containerd[1469]: time="2026-04-24T23:53:33.295454895Z" level=info msg="CreateContainer within sandbox \"3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc9ac9c54b8e860d12278bc2e15126b724ebead701443c53df329c9288140069\"" Apr 24 23:53:33.300025 containerd[1469]: time="2026-04-24T23:53:33.299856676Z" level=info msg="StartContainer for \"cc9ac9c54b8e860d12278bc2e15126b724ebead701443c53df329c9288140069\"" Apr 24 23:53:33.308144 systemd[1]: Started 
cri-containerd-6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff.scope - libcontainer container 6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff. Apr 24 23:53:33.371886 systemd[1]: Started cri-containerd-cc9ac9c54b8e860d12278bc2e15126b724ebead701443c53df329c9288140069.scope - libcontainer container cc9ac9c54b8e860d12278bc2e15126b724ebead701443c53df329c9288140069. Apr 24 23:53:33.382024 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:33.441999 containerd[1469]: time="2026-04-24T23:53:33.441858094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6597b7cd-t7h5t,Uid:4ed747e3-6b68-48ad-8996-db64a1a66b08,Namespace:calico-system,Attempt:1,} returns sandbox id \"6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff\"" Apr 24 23:53:33.519734 containerd[1469]: time="2026-04-24T23:53:33.519521761Z" level=info msg="StartContainer for \"cc9ac9c54b8e860d12278bc2e15126b724ebead701443c53df329c9288140069\" returns successfully" Apr 24 23:53:33.532703 systemd-networkd[1402]: cali7d909ce8779: Gained IPv6LL Apr 24 23:53:33.534114 systemd-networkd[1402]: calic4fc1cf0e70: Gained IPv6LL Apr 24 23:53:33.534273 systemd-networkd[1402]: cali0ae56800fa3: Link UP Apr 24 23:53:33.538202 systemd-networkd[1402]: cali0ae56800fa3: Gained carrier Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:32.650 [INFO][4630] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7655d7cd55--pzfck-eth0 whisker-7655d7cd55- calico-system 855fc80f-32f0-41b1-863d-3c08005010f1 991 0 2026-04-24 23:53:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7655d7cd55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7655d7cd55-pzfck eth0 whisker [] 
[] [kns.calico-system ksa.calico-system.whisker] cali0ae56800fa3 [] [] }} ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:32.668 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.145 [INFO][4713] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" HandleID="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Workload="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.244 [INFO][4713] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" HandleID="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Workload="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012dd70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7655d7cd55-pzfck", "timestamp":"2026-04-24 23:53:33.145340492 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003bb4a0)} Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.262 [INFO][4713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.262 [INFO][4713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.262 [INFO][4713] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.307 [INFO][4713] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.331 [INFO][4713] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.345 [INFO][4713] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.354 [INFO][4713] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.379 [INFO][4713] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.380 [INFO][4713] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.387 [INFO][4713] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18 Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.403 [INFO][4713] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.440 [INFO][4713] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.442 [INFO][4713] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" host="localhost" Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.442 [INFO][4713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:33.578126 containerd[1469]: 2026-04-24 23:53:33.442 [INFO][4713] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" HandleID="k8s-pod-network.a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Workload="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.579837 containerd[1469]: 2026-04-24 23:53:33.517 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7655d7cd55--pzfck-eth0", GenerateName:"whisker-7655d7cd55-", Namespace:"calico-system", SelfLink:"", UID:"855fc80f-32f0-41b1-863d-3c08005010f1", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7655d7cd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7655d7cd55-pzfck", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0ae56800fa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:33.579837 containerd[1469]: 2026-04-24 23:53:33.518 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.579837 containerd[1469]: 2026-04-24 23:53:33.518 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ae56800fa3 ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.579837 containerd[1469]: 2026-04-24 23:53:33.539 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.579837 containerd[1469]: 2026-04-24 23:53:33.542 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" 
Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7655d7cd55--pzfck-eth0", GenerateName:"whisker-7655d7cd55-", Namespace:"calico-system", SelfLink:"", UID:"855fc80f-32f0-41b1-863d-3c08005010f1", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7655d7cd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18", Pod:"whisker-7655d7cd55-pzfck", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0ae56800fa3", MAC:"9e:85:36:8a:7e:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:33.579837 containerd[1469]: 2026-04-24 23:53:33.570 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18" Namespace="calico-system" Pod="whisker-7655d7cd55-pzfck" WorkloadEndpoint="localhost-k8s-whisker--7655d7cd55--pzfck-eth0" Apr 24 23:53:33.610505 containerd[1469]: 
time="2026-04-24T23:53:33.609224906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:53:33.610505 containerd[1469]: time="2026-04-24T23:53:33.609284240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:53:33.610505 containerd[1469]: time="2026-04-24T23:53:33.609295819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:33.610505 containerd[1469]: time="2026-04-24T23:53:33.609357373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:53:33.669131 systemd[1]: Started cri-containerd-a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18.scope - libcontainer container a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18. 
Apr 24 23:53:33.688414 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:53:33.713724 systemd-networkd[1402]: vxlan.calico: Link UP Apr 24 23:53:33.713732 systemd-networkd[1402]: vxlan.calico: Gained carrier Apr 24 23:53:33.742848 containerd[1469]: time="2026-04-24T23:53:33.742109709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7655d7cd55-pzfck,Uid:855fc80f-32f0-41b1-863d-3c08005010f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18\"" Apr 24 23:53:34.138614 kubelet[2518]: E0424 23:53:34.138353 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:34.147063 kubelet[2518]: E0424 23:53:34.146983 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:34.160167 kubelet[2518]: I0424 23:53:34.158619 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-zlthf" podStartSLOduration=40.158605931 podStartE2EDuration="40.158605931s" podCreationTimestamp="2026-04-24 23:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:53:34.158357415 +0000 UTC m=+45.222480737" watchObservedRunningTime="2026-04-24 23:53:34.158605931 +0000 UTC m=+45.222729288" Apr 24 23:53:34.170061 systemd-networkd[1402]: cali4891e05be4d: Gained IPv6LL Apr 24 23:53:34.511193 systemd-networkd[1402]: cali46bd8470252: Gained IPv6LL Apr 24 23:53:34.934359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343978413.mount: Deactivated successfully. 
Apr 24 23:53:35.153328 kubelet[2518]: E0424 23:53:35.153148 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:35.194530 systemd-networkd[1402]: cali0ae56800fa3: Gained IPv6LL Apr 24 23:53:35.313371 containerd[1469]: time="2026-04-24T23:53:35.313104378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:35.314947 containerd[1469]: time="2026-04-24T23:53:35.314147050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:53:35.314947 containerd[1469]: time="2026-04-24T23:53:35.314765241Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:35.314897 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:48898.service - OpenSSH per-connection server daemon (10.0.0.1:48898). 
Apr 24 23:53:35.317643 containerd[1469]: time="2026-04-24T23:53:35.317507044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:35.318416 containerd[1469]: time="2026-04-24T23:53:35.318385925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.537911269s" Apr 24 23:53:35.318457 containerd[1469]: time="2026-04-24T23:53:35.318425848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:53:35.319409 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Apr 24 23:53:35.322763 containerd[1469]: time="2026-04-24T23:53:35.322709549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:53:35.364359 containerd[1469]: time="2026-04-24T23:53:35.363234483Z" level=info msg="CreateContainer within sandbox \"fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:53:35.390323 containerd[1469]: time="2026-04-24T23:53:35.390256754Z" level=info msg="CreateContainer within sandbox \"fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f\"" Apr 24 23:53:35.392140 containerd[1469]: time="2026-04-24T23:53:35.392089630Z" level=info msg="StartContainer for \"f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f\"" Apr 
24 23:53:35.403913 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 48898 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:35.408071 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:35.423661 systemd-logind[1458]: New session 9 of user core. Apr 24 23:53:35.430869 systemd[1]: run-containerd-runc-k8s.io-f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f-runc.vmb2f3.mount: Deactivated successfully. Apr 24 23:53:35.440121 systemd[1]: Started cri-containerd-f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f.scope - libcontainer container f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f. Apr 24 23:53:35.450016 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:53:35.544171 containerd[1469]: time="2026-04-24T23:53:35.537433372Z" level=info msg="StartContainer for \"f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f\" returns successfully" Apr 24 23:53:35.708396 sshd[5033]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:35.712880 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:48898.service: Deactivated successfully. Apr 24 23:53:35.715608 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:53:35.716431 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:53:35.726309 systemd-logind[1458]: Removed session 9. 
Apr 24 23:53:36.177393 kubelet[2518]: E0424 23:53:36.176718 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:36.204657 kubelet[2518]: I0424 23:53:36.204501 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-xjf5d" podStartSLOduration=27.662062187 podStartE2EDuration="31.204480858s" podCreationTimestamp="2026-04-24 23:53:05 +0000 UTC" firstStartedPulling="2026-04-24 23:53:31.779132903 +0000 UTC m=+42.843256210" lastFinishedPulling="2026-04-24 23:53:35.321551574 +0000 UTC m=+46.385674881" observedRunningTime="2026-04-24 23:53:36.202070127 +0000 UTC m=+47.266193439" watchObservedRunningTime="2026-04-24 23:53:36.204480858 +0000 UTC m=+47.268604177" Apr 24 23:53:37.208647 systemd[1]: run-containerd-runc-k8s.io-f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f-runc.8FYaJB.mount: Deactivated successfully. 
Apr 24 23:53:38.793288 containerd[1469]: time="2026-04-24T23:53:38.792883634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:38.795700 containerd[1469]: time="2026-04-24T23:53:38.793290270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:53:38.815494 containerd[1469]: time="2026-04-24T23:53:38.815434927Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:38.868524 containerd[1469]: time="2026-04-24T23:53:38.867510545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:38.872105 containerd[1469]: time="2026-04-24T23:53:38.871773198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.549000245s" Apr 24 23:53:38.872105 containerd[1469]: time="2026-04-24T23:53:38.871990060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 23:53:38.879712 containerd[1469]: time="2026-04-24T23:53:38.879444065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:53:38.912683 containerd[1469]: time="2026-04-24T23:53:38.912335732Z" level=info msg="CreateContainer within sandbox 
\"9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:53:38.978668 containerd[1469]: time="2026-04-24T23:53:38.978334445Z" level=info msg="CreateContainer within sandbox \"9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7\"" Apr 24 23:53:38.980953 containerd[1469]: time="2026-04-24T23:53:38.980865959Z" level=info msg="StartContainer for \"a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7\"" Apr 24 23:53:39.046463 systemd[1]: Started cri-containerd-a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7.scope - libcontainer container a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7. Apr 24 23:53:39.100313 containerd[1469]: time="2026-04-24T23:53:39.100232648Z" level=info msg="StartContainer for \"a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7\" returns successfully" Apr 24 23:53:39.292023 kubelet[2518]: I0424 23:53:39.291829 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64c5b888c4-bkhcl" podStartSLOduration=26.380126447 podStartE2EDuration="33.291814077s" podCreationTimestamp="2026-04-24 23:53:06 +0000 UTC" firstStartedPulling="2026-04-24 23:53:31.965761796 +0000 UTC m=+43.029885103" lastFinishedPulling="2026-04-24 23:53:38.877449425 +0000 UTC m=+49.941572733" observedRunningTime="2026-04-24 23:53:39.291308873 +0000 UTC m=+50.355432195" watchObservedRunningTime="2026-04-24 23:53:39.291814077 +0000 UTC m=+50.355937399" Apr 24 23:53:39.307980 systemd[1]: run-containerd-runc-k8s.io-a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7-runc.7U6KGg.mount: Deactivated successfully. 
Apr 24 23:53:40.286820 systemd[1]: run-containerd-runc-k8s.io-a69c8f30d7cfc4ad781ecdca5ab570c6cbaef5fec0c289fa30d9f133579ca0a7-runc.o24CFB.mount: Deactivated successfully. Apr 24 23:53:40.750104 systemd[1]: Started sshd@9-10.0.0.107:22-10.0.0.1:44662.service - OpenSSH per-connection server daemon (10.0.0.1:44662). Apr 24 23:53:40.866345 sshd[5246]: Accepted publickey for core from 10.0.0.1 port 44662 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:40.868558 sshd[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:40.883163 systemd-logind[1458]: New session 10 of user core. Apr 24 23:53:40.890779 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:53:40.927227 containerd[1469]: time="2026-04-24T23:53:40.926800969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:40.929871 containerd[1469]: time="2026-04-24T23:53:40.927416257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:53:40.929871 containerd[1469]: time="2026-04-24T23:53:40.928413859Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:40.935960 containerd[1469]: time="2026-04-24T23:53:40.935662395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:40.936848 containerd[1469]: time="2026-04-24T23:53:40.936793518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.057183253s" Apr 24 23:53:40.936960 containerd[1469]: time="2026-04-24T23:53:40.936846982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:53:40.941680 containerd[1469]: time="2026-04-24T23:53:40.941652872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:53:40.965014 containerd[1469]: time="2026-04-24T23:53:40.964527006Z" level=info msg="CreateContainer within sandbox \"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:53:41.010273 containerd[1469]: time="2026-04-24T23:53:41.009215411Z" level=info msg="CreateContainer within sandbox \"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d8da49a9bcd17e3b02c50815768cf43deb876bea45c08642bfd0fe8b2c5591f0\"" Apr 24 23:53:41.030160 containerd[1469]: time="2026-04-24T23:53:41.029446084Z" level=info msg="StartContainer for \"d8da49a9bcd17e3b02c50815768cf43deb876bea45c08642bfd0fe8b2c5591f0\"" Apr 24 23:53:41.196090 systemd[1]: Started cri-containerd-d8da49a9bcd17e3b02c50815768cf43deb876bea45c08642bfd0fe8b2c5591f0.scope - libcontainer container d8da49a9bcd17e3b02c50815768cf43deb876bea45c08642bfd0fe8b2c5591f0. Apr 24 23:53:41.283212 containerd[1469]: time="2026-04-24T23:53:41.282331688Z" level=info msg="StartContainer for \"d8da49a9bcd17e3b02c50815768cf43deb876bea45c08642bfd0fe8b2c5591f0\" returns successfully" Apr 24 23:53:41.306792 sshd[5246]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:41.313436 systemd[1]: sshd@9-10.0.0.107:22-10.0.0.1:44662.service: Deactivated successfully. 
Apr 24 23:53:41.316162 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:53:41.316774 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:53:41.318998 systemd-logind[1458]: Removed session 10. Apr 24 23:53:45.408328 containerd[1469]: time="2026-04-24T23:53:45.408009025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:45.411292 containerd[1469]: time="2026-04-24T23:53:45.408441781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:53:45.411292 containerd[1469]: time="2026-04-24T23:53:45.410484987Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:45.414659 containerd[1469]: time="2026-04-24T23:53:45.414613710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:45.420347 containerd[1469]: time="2026-04-24T23:53:45.420298638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.478490734s" Apr 24 23:53:45.420347 containerd[1469]: time="2026-04-24T23:53:45.420345972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:53:45.463535 containerd[1469]: time="2026-04-24T23:53:45.460521720Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:53:45.515820 containerd[1469]: time="2026-04-24T23:53:45.515772428Z" level=info msg="CreateContainer within sandbox \"61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:53:45.544898 containerd[1469]: time="2026-04-24T23:53:45.543417467Z" level=info msg="CreateContainer within sandbox \"61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc35cf848bf15afa4cc5902a25faba208d290566c1a7eb944eb48bc8d2309820\"" Apr 24 23:53:45.550379 containerd[1469]: time="2026-04-24T23:53:45.550052392Z" level=info msg="StartContainer for \"cc35cf848bf15afa4cc5902a25faba208d290566c1a7eb944eb48bc8d2309820\"" Apr 24 23:53:45.805733 systemd[1]: Started cri-containerd-cc35cf848bf15afa4cc5902a25faba208d290566c1a7eb944eb48bc8d2309820.scope - libcontainer container cc35cf848bf15afa4cc5902a25faba208d290566c1a7eb944eb48bc8d2309820. 
Apr 24 23:53:45.871626 containerd[1469]: time="2026-04-24T23:53:45.871538106Z" level=info msg="StartContainer for \"cc35cf848bf15afa4cc5902a25faba208d290566c1a7eb944eb48bc8d2309820\" returns successfully" Apr 24 23:53:46.043743 containerd[1469]: time="2026-04-24T23:53:46.041224414Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:46.050487 containerd[1469]: time="2026-04-24T23:53:46.048329716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:53:46.051842 containerd[1469]: time="2026-04-24T23:53:46.051786710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 590.614125ms" Apr 24 23:53:46.052003 containerd[1469]: time="2026-04-24T23:53:46.051841622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:53:46.069200 containerd[1469]: time="2026-04-24T23:53:46.065390404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:53:46.087766 containerd[1469]: time="2026-04-24T23:53:46.087692820Z" level=info msg="CreateContainer within sandbox \"6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:53:46.144454 containerd[1469]: time="2026-04-24T23:53:46.144175325Z" level=info msg="CreateContainer within sandbox \"6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"544ea8250c8bea271c73fba0403fefcc3ed1393001df1eeda19185cfe36b4de3\"" Apr 24 23:53:46.147512 containerd[1469]: time="2026-04-24T23:53:46.147461441Z" level=info msg="StartContainer for \"544ea8250c8bea271c73fba0403fefcc3ed1393001df1eeda19185cfe36b4de3\"" Apr 24 23:53:46.215333 systemd[1]: Started cri-containerd-544ea8250c8bea271c73fba0403fefcc3ed1393001df1eeda19185cfe36b4de3.scope - libcontainer container 544ea8250c8bea271c73fba0403fefcc3ed1393001df1eeda19185cfe36b4de3. Apr 24 23:53:46.361439 systemd[1]: Started sshd@10-10.0.0.107:22-10.0.0.1:52884.service - OpenSSH per-connection server daemon (10.0.0.1:52884). Apr 24 23:53:46.395057 kubelet[2518]: I0424 23:53:46.394908 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5c6597b7cd-2bsfc" podStartSLOduration=29.017043777 podStartE2EDuration="41.394887456s" podCreationTimestamp="2026-04-24 23:53:05 +0000 UTC" firstStartedPulling="2026-04-24 23:53:33.07759435 +0000 UTC m=+44.141717656" lastFinishedPulling="2026-04-24 23:53:45.455438027 +0000 UTC m=+56.519561335" observedRunningTime="2026-04-24 23:53:46.394598005 +0000 UTC m=+57.458721312" watchObservedRunningTime="2026-04-24 23:53:46.394887456 +0000 UTC m=+57.459010763" Apr 24 23:53:46.414333 containerd[1469]: time="2026-04-24T23:53:46.414172154Z" level=info msg="StartContainer for \"544ea8250c8bea271c73fba0403fefcc3ed1393001df1eeda19185cfe36b4de3\" returns successfully" Apr 24 23:53:46.457808 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 52884 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:46.464281 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:46.473894 systemd-logind[1458]: New session 11 of user core. Apr 24 23:53:46.481026 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 24 23:53:46.941283 sshd[5375]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:46.946231 systemd[1]: sshd@10-10.0.0.107:22-10.0.0.1:52884.service: Deactivated successfully. Apr 24 23:53:47.020656 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:53:47.044544 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:53:47.047638 systemd-logind[1458]: Removed session 11. Apr 24 23:53:48.558231 containerd[1469]: time="2026-04-24T23:53:48.557464467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:48.560112 containerd[1469]: time="2026-04-24T23:53:48.560042425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:53:48.561649 containerd[1469]: time="2026-04-24T23:53:48.561588654Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:48.565298 containerd[1469]: time="2026-04-24T23:53:48.564823144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:48.566326 containerd[1469]: time="2026-04-24T23:53:48.566276511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.500792377s" Apr 24 23:53:48.566326 containerd[1469]: time="2026-04-24T23:53:48.566318837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns 
image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:53:48.575461 containerd[1469]: time="2026-04-24T23:53:48.574807341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:53:48.604390 containerd[1469]: time="2026-04-24T23:53:48.604139424Z" level=info msg="CreateContainer within sandbox \"a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:53:48.668588 containerd[1469]: time="2026-04-24T23:53:48.667820820Z" level=info msg="CreateContainer within sandbox \"a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9c35cc9808f979c98232fbda941561d4990e5efd7b613a3c90f056c579cdfc5f\"" Apr 24 23:53:48.677721 containerd[1469]: time="2026-04-24T23:53:48.677472070Z" level=info msg="StartContainer for \"9c35cc9808f979c98232fbda941561d4990e5efd7b613a3c90f056c579cdfc5f\"" Apr 24 23:53:48.808358 systemd[1]: Started cri-containerd-9c35cc9808f979c98232fbda941561d4990e5efd7b613a3c90f056c579cdfc5f.scope - libcontainer container 9c35cc9808f979c98232fbda941561d4990e5efd7b613a3c90f056c579cdfc5f. 
Apr 24 23:53:49.075841 containerd[1469]: time="2026-04-24T23:53:49.075259050Z" level=info msg="StartContainer for \"9c35cc9808f979c98232fbda941561d4990e5efd7b613a3c90f056c579cdfc5f\" returns successfully" Apr 24 23:53:49.229470 containerd[1469]: time="2026-04-24T23:53:49.229375537Z" level=info msg="StopPodSandbox for \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\"" Apr 24 23:53:49.855258 kubelet[2518]: I0424 23:53:49.851915 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5c6597b7cd-t7h5t" podStartSLOduration=32.270482269 podStartE2EDuration="44.851854489s" podCreationTimestamp="2026-04-24 23:53:05 +0000 UTC" firstStartedPulling="2026-04-24 23:53:33.479766704 +0000 UTC m=+44.543890010" lastFinishedPulling="2026-04-24 23:53:46.061138919 +0000 UTC m=+57.125262230" observedRunningTime="2026-04-24 23:53:47.422599741 +0000 UTC m=+58.486723060" watchObservedRunningTime="2026-04-24 23:53:49.851854489 +0000 UTC m=+60.915977810" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:49.504 [WARNING][5469] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ccd9b5aa-d456-4c30-851f-fc449a59f911", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf", Pod:"goldmane-9f7667bb8-xjf5d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8b739a6006", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:49.506 [INFO][5469] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:49.507 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" iface="eth0" netns="" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:49.507 [INFO][5469] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:49.507 [INFO][5469] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.093 [INFO][5477] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.098 [INFO][5477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.099 [INFO][5477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.133 [WARNING][5477] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.136 [INFO][5477] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.162 [INFO][5477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:50.185168 containerd[1469]: 2026-04-24 23:53:50.174 [INFO][5469] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.207237 containerd[1469]: time="2026-04-24T23:53:50.206877360Z" level=info msg="TearDown network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\" successfully" Apr 24 23:53:50.207237 containerd[1469]: time="2026-04-24T23:53:50.207191958Z" level=info msg="StopPodSandbox for \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\" returns successfully" Apr 24 23:53:50.305734 containerd[1469]: time="2026-04-24T23:53:50.305173693Z" level=info msg="RemovePodSandbox for \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\"" Apr 24 23:53:50.312028 containerd[1469]: time="2026-04-24T23:53:50.311865670Z" level=info msg="Forcibly stopping sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\"" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.443 [WARNING][5503] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ccd9b5aa-d456-4c30-851f-fc449a59f911", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd875abd49d2e79d3ad49116dedd52deb534b40dd407aa1b2c857835c019f6bf", Pod:"goldmane-9f7667bb8-xjf5d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8b739a6006", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.444 [INFO][5503] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.444 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" iface="eth0" netns="" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.444 [INFO][5503] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.444 [INFO][5503] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.572 [INFO][5512] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.573 [INFO][5512] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.573 [INFO][5512] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.618 [WARNING][5512] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.618 [INFO][5512] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" HandleID="k8s-pod-network.1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Workload="localhost-k8s-goldmane--9f7667bb8--xjf5d-eth0" Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.636 [INFO][5512] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:50.650214 containerd[1469]: 2026-04-24 23:53:50.640 [INFO][5503] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd" Apr 24 23:53:50.689108 containerd[1469]: time="2026-04-24T23:53:50.650161245Z" level=info msg="TearDown network for sandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\" successfully" Apr 24 23:53:50.703624 containerd[1469]: time="2026-04-24T23:53:50.703446001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:50.704431 containerd[1469]: time="2026-04-24T23:53:50.703897863Z" level=info msg="RemovePodSandbox \"1d17b5a1fc1e82f2986e0d28309502c232b797db4c74c112bdff5214d7fd40cd\" returns successfully" Apr 24 23:53:50.713124 containerd[1469]: time="2026-04-24T23:53:50.713034895Z" level=info msg="StopPodSandbox for \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\"" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:50.943 [WARNING][5531] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0", GenerateName:"calico-kube-controllers-64c5b888c4-", Namespace:"calico-system", SelfLink:"", UID:"db8622af-35ea-4a80-8494-1d6f8141a50e", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c5b888c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172", Pod:"calico-kube-controllers-64c5b888c4-bkhcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2882a89644c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:50.945 [INFO][5531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:50.945 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" iface="eth0" netns="" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:50.945 [INFO][5531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:50.945 [INFO][5531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.186 [INFO][5542] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.187 [INFO][5542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.187 [INFO][5542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.205 [WARNING][5542] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.213 [INFO][5542] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.230 [INFO][5542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:51.248988 containerd[1469]: 2026-04-24 23:53:51.238 [INFO][5531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.248988 containerd[1469]: time="2026-04-24T23:53:51.248283401Z" level=info msg="TearDown network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\" successfully" Apr 24 23:53:51.248988 containerd[1469]: time="2026-04-24T23:53:51.248386039Z" level=info msg="StopPodSandbox for \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\" returns successfully" Apr 24 23:53:51.287424 containerd[1469]: time="2026-04-24T23:53:51.286104369Z" level=info msg="RemovePodSandbox for \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\"" Apr 24 23:53:51.287424 containerd[1469]: time="2026-04-24T23:53:51.286531052Z" level=info msg="Forcibly stopping sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\"" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.562 [WARNING][5560] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0", GenerateName:"calico-kube-controllers-64c5b888c4-", Namespace:"calico-system", SelfLink:"", UID:"db8622af-35ea-4a80-8494-1d6f8141a50e", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c5b888c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b3474e40c3c2e71e31f74d3ead931c2d7811c117c7a6ed4626dd41c39d0e172", Pod:"calico-kube-controllers-64c5b888c4-bkhcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2882a89644c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.565 [INFO][5560] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.565 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" iface="eth0" netns="" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.565 [INFO][5560] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.565 [INFO][5560] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.735 [INFO][5568] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.735 [INFO][5568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.735 [INFO][5568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.746 [WARNING][5568] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.748 [INFO][5568] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" HandleID="k8s-pod-network.6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Workload="localhost-k8s-calico--kube--controllers--64c5b888c4--bkhcl-eth0" Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.758 [INFO][5568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:51.766700 containerd[1469]: 2026-04-24 23:53:51.762 [INFO][5560] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867" Apr 24 23:53:51.769804 containerd[1469]: time="2026-04-24T23:53:51.768276426Z" level=info msg="TearDown network for sandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\" successfully" Apr 24 23:53:51.776497 containerd[1469]: time="2026-04-24T23:53:51.776264559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:51.777064 containerd[1469]: time="2026-04-24T23:53:51.776598868Z" level=info msg="RemovePodSandbox \"6cab0ad29d1223a20043cb5dcd0dd86a4e9800a1e848d4039546e93fcf208867\" returns successfully" Apr 24 23:53:51.780091 containerd[1469]: time="2026-04-24T23:53:51.780051214Z" level=info msg="StopPodSandbox for \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\"" Apr 24 23:53:52.000806 systemd[1]: Started sshd@11-10.0.0.107:22-10.0.0.1:52894.service - OpenSSH per-connection server daemon (10.0.0.1:52894). Apr 24 23:53:52.399117 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 52894 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:52.406101 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.155 [WARNING][5585] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" WorkloadEndpoint="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.157 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.157 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" iface="eth0" netns="" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.157 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.157 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.312 [INFO][5595] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.313 [INFO][5595] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.314 [INFO][5595] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.387 [WARNING][5595] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.387 [INFO][5595] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.405 [INFO][5595] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:52.416245 containerd[1469]: 2026-04-24 23:53:52.410 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:52.417879 containerd[1469]: time="2026-04-24T23:53:52.416717256Z" level=info msg="TearDown network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\" successfully" Apr 24 23:53:52.417879 containerd[1469]: time="2026-04-24T23:53:52.416872136Z" level=info msg="StopPodSandbox for \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\" returns successfully" Apr 24 23:53:52.425444 containerd[1469]: time="2026-04-24T23:53:52.425085513Z" level=info msg="RemovePodSandbox for \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\"" Apr 24 23:53:52.426150 containerd[1469]: time="2026-04-24T23:53:52.425535430Z" level=info msg="Forcibly stopping sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\"" Apr 24 23:53:52.433127 systemd-logind[1458]: New session 12 of user core. Apr 24 23:53:52.443743 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.681 [WARNING][5614] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" WorkloadEndpoint="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.684 [INFO][5614] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.685 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" iface="eth0" netns="" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.698 [INFO][5614] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.698 [INFO][5614] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.941 [INFO][5632] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.944 [INFO][5632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:52.946 [INFO][5632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:53.106 [WARNING][5632] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:53.115 [INFO][5632] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" HandleID="k8s-pod-network.8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Workload="localhost-k8s-whisker--85894b7686--49tvq-eth0" Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:53.205 [INFO][5632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:53.288466 containerd[1469]: 2026-04-24 23:53:53.265 [INFO][5614] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b" Apr 24 23:53:53.289570 containerd[1469]: time="2026-04-24T23:53:53.288921043Z" level=info msg="TearDown network for sandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\" successfully" Apr 24 23:53:53.421416 containerd[1469]: time="2026-04-24T23:53:53.421313469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:53.424055 containerd[1469]: time="2026-04-24T23:53:53.421560353Z" level=info msg="RemovePodSandbox \"8f1752a8ab6786b2c71a17965e2ac530734bf44c35f0395bbe44bb87e0efc84b\" returns successfully" Apr 24 23:53:53.432611 sshd[5593]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:53.436036 containerd[1469]: time="2026-04-24T23:53:53.429697986Z" level=info msg="StopPodSandbox for \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\"" Apr 24 23:53:53.447341 systemd[1]: Started sshd@12-10.0.0.107:22-10.0.0.1:52910.service - OpenSSH per-connection server daemon (10.0.0.1:52910). Apr 24 23:53:53.448755 systemd[1]: sshd@11-10.0.0.107:22-10.0.0.1:52894.service: Deactivated successfully. Apr 24 23:53:53.458019 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:53:53.461694 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:53:53.474338 systemd-logind[1458]: Removed session 12. Apr 24 23:53:53.704397 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 52910 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:53.708901 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:53.720070 systemd-logind[1458]: New session 13 of user core. Apr 24 23:53:53.724510 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.736 [WARNING][5659] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--zlthf-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"707b52a6-210e-4b12-ba60-391f0fd35951", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d", Pod:"coredns-7d764666f9-zlthf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4891e05be4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.737 [INFO][5659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.737 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" iface="eth0" netns="" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.737 [INFO][5659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.738 [INFO][5659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.881 [INFO][5670] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.888 [INFO][5670] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.888 [INFO][5670] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.897 [WARNING][5670] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.897 [INFO][5670] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.899 [INFO][5670] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:53.904881 containerd[1469]: 2026-04-24 23:53:53.901 [INFO][5659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:53.907246 containerd[1469]: time="2026-04-24T23:53:53.905514269Z" level=info msg="TearDown network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\" successfully" Apr 24 23:53:53.907246 containerd[1469]: time="2026-04-24T23:53:53.905732727Z" level=info msg="StopPodSandbox for \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\" returns successfully" Apr 24 23:53:53.910605 containerd[1469]: time="2026-04-24T23:53:53.910548757Z" level=info msg="RemovePodSandbox for \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\"" Apr 24 23:53:53.910699 containerd[1469]: time="2026-04-24T23:53:53.910665284Z" level=info msg="Forcibly stopping sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\"" Apr 24 23:53:54.215482 sshd[5646]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:54.237685 containerd[1469]: time="2026-04-24T23:53:54.237423035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:54.246229 containerd[1469]: time="2026-04-24T23:53:54.245054244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:53:54.247094 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:52912.service - OpenSSH per-connection server daemon (10.0.0.1:52912). Apr 24 23:53:54.250626 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:52910.service: Deactivated successfully. Apr 24 23:53:54.265553 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:53:54.290261 containerd[1469]: time="2026-04-24T23:53:54.270601040Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:54.302626 containerd[1469]: time="2026-04-24T23:53:54.302574738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:54.305358 containerd[1469]: time="2026-04-24T23:53:54.303825544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 5.725563703s" Apr 24 23:53:54.305358 containerd[1469]: time="2026-04-24T23:53:54.304951292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:53:54.310247 systemd-logind[1458]: Session 13 logged out. 
Waiting for processes to exit. Apr 24 23:53:54.339598 systemd-logind[1458]: Removed session 13. Apr 24 23:53:54.390162 containerd[1469]: time="2026-04-24T23:53:54.390089251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:53:54.526876 containerd[1469]: time="2026-04-24T23:53:54.526440690Z" level=info msg="CreateContainer within sandbox \"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:53:54.606555 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 52912 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:53:54.614994 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:53:54.763718 systemd-logind[1458]: New session 14 of user core. Apr 24 23:53:54.766775 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:53:54.821397 containerd[1469]: time="2026-04-24T23:53:54.820475896Z" level=info msg="CreateContainer within sandbox \"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b01948470017e3b925466e433ca25fb9101fc4f2d32d08a9259138f5a45dc8f\"" Apr 24 23:53:54.839208 containerd[1469]: time="2026-04-24T23:53:54.838100635Z" level=info msg="StartContainer for \"9b01948470017e3b925466e433ca25fb9101fc4f2d32d08a9259138f5a45dc8f\"" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.139 [WARNING][5694] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--zlthf-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"707b52a6-210e-4b12-ba60-391f0fd35951", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a93d388ba7e586858cbaa14db6de9d0a858cef205024525c1eb6704235ab36d", Pod:"coredns-7d764666f9-zlthf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4891e05be4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.154 [INFO][5694] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.154 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" iface="eth0" netns="" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.154 [INFO][5694] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.154 [INFO][5694] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.770 [INFO][5703] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.776 [INFO][5703] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.777 [INFO][5703] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.820 [WARNING][5703] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.820 [INFO][5703] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" HandleID="k8s-pod-network.5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Workload="localhost-k8s-coredns--7d764666f9--zlthf-eth0" Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.904 [INFO][5703] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:54.987056 containerd[1469]: 2026-04-24 23:53:54.919 [INFO][5694] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a" Apr 24 23:53:54.994840 containerd[1469]: time="2026-04-24T23:53:54.994280699Z" level=info msg="TearDown network for sandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\" successfully" Apr 24 23:53:55.126319 systemd[1]: Started cri-containerd-9b01948470017e3b925466e433ca25fb9101fc4f2d32d08a9259138f5a45dc8f.scope - libcontainer container 9b01948470017e3b925466e433ca25fb9101fc4f2d32d08a9259138f5a45dc8f. Apr 24 23:53:55.145789 containerd[1469]: time="2026-04-24T23:53:55.145453355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:55.147258 containerd[1469]: time="2026-04-24T23:53:55.146014641Z" level=info msg="RemovePodSandbox \"5d0a5d840ec82099405811423d7184d65a098fe913c97b3d21ba8859b13d4f3a\" returns successfully" Apr 24 23:53:55.163007 containerd[1469]: time="2026-04-24T23:53:55.161946623Z" level=info msg="StopPodSandbox for \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\"" Apr 24 23:53:55.537896 sshd[5712]: pam_unix(sshd:session): session closed for user core Apr 24 23:53:55.614802 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:52912.service: Deactivated successfully. Apr 24 23:53:55.627790 containerd[1469]: time="2026-04-24T23:53:55.626782524Z" level=info msg="StartContainer for \"9b01948470017e3b925466e433ca25fb9101fc4f2d32d08a9259138f5a45dc8f\" returns successfully" Apr 24 23:53:55.644543 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:53:55.667579 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:53:55.669535 systemd-logind[1458]: Removed session 14. Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.736 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x29xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98a32295-87c4-4c33-bd3a-7a5df06b2711", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99", Pod:"csi-node-driver-x29xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4fc1cf0e70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.794 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.794 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" iface="eth0" netns="" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.795 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.795 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.861 [INFO][5801] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.862 [INFO][5801] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.862 [INFO][5801] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.881 [WARNING][5801] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.881 [INFO][5801] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.886 [INFO][5801] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:55.891506 containerd[1469]: 2026-04-24 23:53:55.888 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:55.894302 containerd[1469]: time="2026-04-24T23:53:55.894142267Z" level=info msg="TearDown network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\" successfully" Apr 24 23:53:55.894395 containerd[1469]: time="2026-04-24T23:53:55.894333792Z" level=info msg="StopPodSandbox for \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\" returns successfully" Apr 24 23:53:55.896754 containerd[1469]: time="2026-04-24T23:53:55.896698921Z" level=info msg="RemovePodSandbox for \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\"" Apr 24 23:53:55.896835 containerd[1469]: time="2026-04-24T23:53:55.896811245Z" level=info msg="Forcibly stopping sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\"" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.143 [WARNING][5821] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x29xn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98a32295-87c4-4c33-bd3a-7a5df06b2711", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49aff2c09741a9669327ec28d73efb7fc1288b65fde7a3fab23b2e635474ff99", Pod:"csi-node-driver-x29xn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4fc1cf0e70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.145 [INFO][5821] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.145 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" iface="eth0" netns="" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.145 [INFO][5821] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.145 [INFO][5821] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.200 [INFO][5835] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.200 [INFO][5835] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.200 [INFO][5835] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.213 [WARNING][5835] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.214 [INFO][5835] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" HandleID="k8s-pod-network.8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Workload="localhost-k8s-csi--node--driver--x29xn-eth0" Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.222 [INFO][5835] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:56.228950 containerd[1469]: 2026-04-24 23:53:56.226 [INFO][5821] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0" Apr 24 23:53:56.230319 containerd[1469]: time="2026-04-24T23:53:56.230242814Z" level=info msg="TearDown network for sandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\" successfully" Apr 24 23:53:56.246358 containerd[1469]: time="2026-04-24T23:53:56.245714021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:56.247919 containerd[1469]: time="2026-04-24T23:53:56.247311329Z" level=info msg="RemovePodSandbox \"8fac5dd89fe0c3ee3823ebeab89a1c04f4d820b222af963a0eadf8df18f4dcb0\" returns successfully" Apr 24 23:53:56.250364 containerd[1469]: time="2026-04-24T23:53:56.250326920Z" level=info msg="StopPodSandbox for \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\"" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.329 [WARNING][5856] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"98059560-e73c-45fa-bdb8-c35ea535eaa4", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b", Pod:"calico-apiserver-5c6597b7cd-2bsfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7d909ce8779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.331 [INFO][5856] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.331 [INFO][5856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" iface="eth0" netns="" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.331 [INFO][5856] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.331 [INFO][5856] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.382 [INFO][5865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.383 [INFO][5865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.383 [INFO][5865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.392 [WARNING][5865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.392 [INFO][5865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.394 [INFO][5865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:56.398232 containerd[1469]: 2026-04-24 23:53:56.396 [INFO][5856] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.400231 containerd[1469]: time="2026-04-24T23:53:56.398313120Z" level=info msg="TearDown network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\" successfully" Apr 24 23:53:56.400231 containerd[1469]: time="2026-04-24T23:53:56.398367491Z" level=info msg="StopPodSandbox for \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\" returns successfully" Apr 24 23:53:56.405502 containerd[1469]: time="2026-04-24T23:53:56.405343165Z" level=info msg="RemovePodSandbox for \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\"" Apr 24 23:53:56.405502 containerd[1469]: time="2026-04-24T23:53:56.405505090Z" level=info msg="Forcibly stopping sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\"" Apr 24 23:53:56.607532 kubelet[2518]: I0424 23:53:56.602483 2518 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 
23:53:56.613105 kubelet[2518]: I0424 23:53:56.611793 2518 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.552 [WARNING][5882] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"98059560-e73c-45fa-bdb8-c35ea535eaa4", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61832a7911ed8ca5d309d58b0551479318e4b6a35ecbdbcd40090fbdae07965b", Pod:"calico-apiserver-5c6597b7cd-2bsfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7d909ce8779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.556 [INFO][5882] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.556 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" iface="eth0" netns="" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.556 [INFO][5882] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.556 [INFO][5882] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.660 [INFO][5891] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.664 [INFO][5891] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.664 [INFO][5891] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.682 [WARNING][5891] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.682 [INFO][5891] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" HandleID="k8s-pod-network.5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--2bsfc-eth0" Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.708 [INFO][5891] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:56.727560 containerd[1469]: 2026-04-24 23:53:56.719 [INFO][5882] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8" Apr 24 23:53:56.727560 containerd[1469]: time="2026-04-24T23:53:56.727586530Z" level=info msg="TearDown network for sandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\" successfully" Apr 24 23:53:56.741009 containerd[1469]: time="2026-04-24T23:53:56.740915862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:56.741135 containerd[1469]: time="2026-04-24T23:53:56.741061058Z" level=info msg="RemovePodSandbox \"5f52aa1a79f14db65926246078470e6b892e23d88ccdc7055e420e7c97c381e8\" returns successfully" Apr 24 23:53:56.746016 containerd[1469]: time="2026-04-24T23:53:56.743790508Z" level=info msg="StopPodSandbox for \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\"" Apr 24 23:53:56.832279 kubelet[2518]: I0424 23:53:56.831892 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-x29xn" podStartSLOduration=29.265105952 podStartE2EDuration="50.831823991s" podCreationTimestamp="2026-04-24 23:53:06 +0000 UTC" firstStartedPulling="2026-04-24 23:53:32.782777845 +0000 UTC m=+43.846901152" lastFinishedPulling="2026-04-24 23:53:54.349495885 +0000 UTC m=+65.413619191" observedRunningTime="2026-04-24 23:53:56.831740068 +0000 UTC m=+67.895863394" watchObservedRunningTime="2026-04-24 23:53:56.831823991 +0000 UTC m=+67.895947315" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:56.927 [WARNING][5911] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"4ed747e3-6b68-48ad-8996-db64a1a66b08", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff", Pod:"calico-apiserver-5c6597b7cd-t7h5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali46bd8470252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:56.928 [INFO][5911] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:56.928 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" iface="eth0" netns="" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:56.928 [INFO][5911] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:56.928 [INFO][5911] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.100 [INFO][5919] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.101 [INFO][5919] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.101 [INFO][5919] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.130 [WARNING][5919] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.130 [INFO][5919] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.145 [INFO][5919] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:57.160817 containerd[1469]: 2026-04-24 23:53:57.154 [INFO][5911] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.162509 containerd[1469]: time="2026-04-24T23:53:57.161399974Z" level=info msg="TearDown network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\" successfully" Apr 24 23:53:57.162509 containerd[1469]: time="2026-04-24T23:53:57.161534056Z" level=info msg="StopPodSandbox for \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\" returns successfully" Apr 24 23:53:57.164404 containerd[1469]: time="2026-04-24T23:53:57.164350409Z" level=info msg="RemovePodSandbox for \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\"" Apr 24 23:53:57.164521 containerd[1469]: time="2026-04-24T23:53:57.164471821Z" level=info msg="Forcibly stopping sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\"" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.262 [WARNING][5940] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0", GenerateName:"calico-apiserver-5c6597b7cd-", Namespace:"calico-system", SelfLink:"", UID:"4ed747e3-6b68-48ad-8996-db64a1a66b08", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6597b7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6480a07ab44b216273f78182a1d978650972a800673d129063f9c9c17c97dcff", Pod:"calico-apiserver-5c6597b7cd-t7h5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali46bd8470252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.263 [INFO][5940] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.263 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" iface="eth0" netns="" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.264 [INFO][5940] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.264 [INFO][5940] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.390 [INFO][5948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.390 [INFO][5948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.390 [INFO][5948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.398 [WARNING][5948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.398 [INFO][5948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" HandleID="k8s-pod-network.56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Workload="localhost-k8s-calico--apiserver--5c6597b7cd--t7h5t-eth0" Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.407 [INFO][5948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:57.414391 containerd[1469]: 2026-04-24 23:53:57.409 [INFO][5940] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8" Apr 24 23:53:57.424606 containerd[1469]: time="2026-04-24T23:53:57.419034843Z" level=info msg="TearDown network for sandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\" successfully" Apr 24 23:53:57.441715 containerd[1469]: time="2026-04-24T23:53:57.441409113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:57.446825 containerd[1469]: time="2026-04-24T23:53:57.445758281Z" level=info msg="RemovePodSandbox \"56580dcdaf903ff929b5de920ff203ad47cdb3016fd1418a115242338d87faf8\" returns successfully" Apr 24 23:53:57.451950 containerd[1469]: time="2026-04-24T23:53:57.451897448Z" level=info msg="StopPodSandbox for \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\"" Apr 24 23:53:57.473215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692086043.mount: Deactivated successfully. Apr 24 23:53:57.553344 containerd[1469]: time="2026-04-24T23:53:57.529803396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:57.553344 containerd[1469]: time="2026-04-24T23:53:57.530036765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 24 23:53:57.603846 containerd[1469]: time="2026-04-24T23:53:57.596583035Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:57.624457 containerd[1469]: time="2026-04-24T23:53:57.605836714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:53:57.631477 containerd[1469]: time="2026-04-24T23:53:57.631194437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.237102831s" Apr 24 23:53:57.632266 
containerd[1469]: time="2026-04-24T23:53:57.632211479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 24 23:53:57.738320 containerd[1469]: time="2026-04-24T23:53:57.737748556Z" level=info msg="CreateContainer within sandbox \"a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:53:57.844741 containerd[1469]: time="2026-04-24T23:53:57.843453672Z" level=info msg="CreateContainer within sandbox \"a71db1dc39a49a9dbdd73403ea8ba72620cd4010ec3f16affc10c2295fb1ab18\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"84988cd8153657fd290f1517494d0e1d7c482d43b7ab9c46c86ddafa53793134\"" Apr 24 23:53:57.850117 containerd[1469]: time="2026-04-24T23:53:57.849952130Z" level=info msg="StartContainer for \"84988cd8153657fd290f1517494d0e1d7c482d43b7ab9c46c86ddafa53793134\"" Apr 24 23:53:57.985053 systemd[1]: Started cri-containerd-84988cd8153657fd290f1517494d0e1d7c482d43b7ab9c46c86ddafa53793134.scope - libcontainer container 84988cd8153657fd290f1517494d0e1d7c482d43b7ab9c46c86ddafa53793134. Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:57.859 [WARNING][5968] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cq94j-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2654d15d-7c11-4173-8201-2dab36e1b04b", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0", Pod:"coredns-7d764666f9-cq94j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5821818801", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:57.860 [INFO][5968] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:57.860 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" iface="eth0" netns="" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:57.860 [INFO][5968] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:57.860 [INFO][5968] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.023 [INFO][5981] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.024 [INFO][5981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.024 [INFO][5981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.048 [WARNING][5981] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.048 [INFO][5981] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.083 [INFO][5981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:58.092465 containerd[1469]: 2026-04-24 23:53:58.087 [INFO][5968] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.093401 containerd[1469]: time="2026-04-24T23:53:58.092455074Z" level=info msg="TearDown network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\" successfully" Apr 24 23:53:58.093401 containerd[1469]: time="2026-04-24T23:53:58.092598150Z" level=info msg="StopPodSandbox for \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\" returns successfully" Apr 24 23:53:58.095721 containerd[1469]: time="2026-04-24T23:53:58.095688703Z" level=info msg="RemovePodSandbox for \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\"" Apr 24 23:53:58.095814 containerd[1469]: time="2026-04-24T23:53:58.095798984Z" level=info msg="Forcibly stopping sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\"" Apr 24 23:53:58.144399 containerd[1469]: time="2026-04-24T23:53:58.142588217Z" level=info msg="StartContainer for \"84988cd8153657fd290f1517494d0e1d7c482d43b7ab9c46c86ddafa53793134\" returns successfully" Apr 24 23:53:58.243476 kubelet[2518]: E0424 23:53:58.241743 
2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.243 [WARNING][6025] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cq94j-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2654d15d-7c11-4173-8201-2dab36e1b04b", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6a98d9db7ebaf71e5eeba794437857753fe395472c8b990733552a8950ed4d0", Pod:"coredns-7d764666f9-cq94j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5821818801", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.246 [INFO][6025] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.247 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" iface="eth0" netns="" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.247 [INFO][6025] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.247 [INFO][6025] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.392 [INFO][6045] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.394 [INFO][6045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.394 [INFO][6045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.409 [WARNING][6045] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.410 [INFO][6045] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" HandleID="k8s-pod-network.86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Workload="localhost-k8s-coredns--7d764666f9--cq94j-eth0" Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.415 [INFO][6045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:53:58.428329 containerd[1469]: 2026-04-24 23:53:58.423 [INFO][6025] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2" Apr 24 23:53:58.429228 containerd[1469]: time="2026-04-24T23:53:58.428291041Z" level=info msg="TearDown network for sandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\" successfully" Apr 24 23:53:58.437154 containerd[1469]: time="2026-04-24T23:53:58.436841892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:53:58.437716 containerd[1469]: time="2026-04-24T23:53:58.437298734Z" level=info msg="RemovePodSandbox \"86b924bd6ae534fecfc9a5b0f73b44af44dc501b77d379f8e23462f5719992d2\" returns successfully" Apr 24 23:53:58.866406 kubelet[2518]: I0424 23:53:58.866148 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7655d7cd55-pzfck" podStartSLOduration=3.910520305 podStartE2EDuration="27.866052352s" podCreationTimestamp="2026-04-24 23:53:31 +0000 UTC" firstStartedPulling="2026-04-24 23:53:33.750300636 +0000 UTC m=+44.814423945" lastFinishedPulling="2026-04-24 23:53:57.705832682 +0000 UTC m=+68.769955992" observedRunningTime="2026-04-24 23:53:58.86546176 +0000 UTC m=+69.929585083" watchObservedRunningTime="2026-04-24 23:53:58.866052352 +0000 UTC m=+69.930175660" Apr 24 23:54:00.572289 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:55752.service - OpenSSH per-connection server daemon (10.0.0.1:55752). Apr 24 23:54:00.631554 sshd[6059]: Accepted publickey for core from 10.0.0.1 port 55752 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:00.634467 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:00.647807 systemd-logind[1458]: New session 15 of user core. Apr 24 23:54:00.657157 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:54:01.260020 sshd[6059]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:01.265806 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:55752.service: Deactivated successfully. Apr 24 23:54:01.268293 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:54:01.270631 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:54:01.272233 systemd-logind[1458]: Removed session 15. Apr 24 23:54:06.310791 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:37036.service - OpenSSH per-connection server daemon (10.0.0.1:37036). 
Apr 24 23:54:06.381066 sshd[6100]: Accepted publickey for core from 10.0.0.1 port 37036 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:06.383703 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:06.397584 systemd-logind[1458]: New session 16 of user core. Apr 24 23:54:06.408079 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:54:06.640833 sshd[6100]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:06.657553 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:37036.service: Deactivated successfully. Apr 24 23:54:06.660214 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:54:06.669632 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:54:06.675954 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:37048.service - OpenSSH per-connection server daemon (10.0.0.1:37048). Apr 24 23:54:06.677655 systemd-logind[1458]: Removed session 16. Apr 24 23:54:06.742015 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 37048 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:06.742821 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:06.755707 systemd-logind[1458]: New session 17 of user core. Apr 24 23:54:06.765254 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:54:07.112024 sshd[6114]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:07.121875 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:37048.service: Deactivated successfully. Apr 24 23:54:07.123381 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:54:07.125116 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:54:07.129233 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:37056.service - OpenSSH per-connection server daemon (10.0.0.1:37056). 
Apr 24 23:54:07.129868 systemd-logind[1458]: Removed session 17. Apr 24 23:54:07.187041 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 37056 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:07.193369 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:07.205249 systemd-logind[1458]: New session 18 of user core. Apr 24 23:54:07.218703 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:54:07.997637 sshd[6130]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:08.013800 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:37056.service: Deactivated successfully. Apr 24 23:54:08.015428 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:54:08.016634 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:54:08.023269 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:37072.service - OpenSSH per-connection server daemon (10.0.0.1:37072). Apr 24 23:54:08.024445 systemd-logind[1458]: Removed session 18. Apr 24 23:54:08.104483 sshd[6155]: Accepted publickey for core from 10.0.0.1 port 37072 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:08.111293 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:08.122791 systemd-logind[1458]: New session 19 of user core. Apr 24 23:54:08.130117 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:54:08.226061 systemd[1]: run-containerd-runc-k8s.io-f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f-runc.Bz4ml8.mount: Deactivated successfully. Apr 24 23:54:08.736555 sshd[6155]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:08.743660 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:37072.service: Deactivated successfully. Apr 24 23:54:08.745338 systemd[1]: session-19.scope: Deactivated successfully. 
Apr 24 23:54:08.747406 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:54:08.769372 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:37078.service - OpenSSH per-connection server daemon (10.0.0.1:37078). Apr 24 23:54:08.770816 systemd-logind[1458]: Removed session 19. Apr 24 23:54:08.814704 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 37078 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:08.816384 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:08.822142 systemd-logind[1458]: New session 20 of user core. Apr 24 23:54:08.831152 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 23:54:09.025920 sshd[6188]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:09.029950 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:37078.service: Deactivated successfully. Apr 24 23:54:09.032335 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:54:09.033073 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:54:09.034056 systemd-logind[1458]: Removed session 20. Apr 24 23:54:14.077238 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:37088.service - OpenSSH per-connection server daemon (10.0.0.1:37088). Apr 24 23:54:14.116854 sshd[6229]: Accepted publickey for core from 10.0.0.1 port 37088 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:14.119653 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:14.125908 systemd-logind[1458]: New session 21 of user core. Apr 24 23:54:14.137162 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 23:54:14.312858 sshd[6229]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:14.317042 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:37088.service: Deactivated successfully. 
Apr 24 23:54:14.319445 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 23:54:14.320945 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Apr 24 23:54:14.322184 systemd-logind[1458]: Removed session 21. Apr 24 23:54:19.329756 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846). Apr 24 23:54:19.394253 sshd[6255]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:19.395419 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:19.405316 systemd-logind[1458]: New session 22 of user core. Apr 24 23:54:19.412017 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 24 23:54:19.647018 sshd[6255]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:19.651700 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:53846.service: Deactivated successfully. Apr 24 23:54:19.656889 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 23:54:19.660044 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Apr 24 23:54:19.661193 systemd-logind[1458]: Removed session 22. Apr 24 23:54:21.230487 kubelet[2518]: E0424 23:54:21.230343 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:54:23.833185 systemd[1]: run-containerd-runc-k8s.io-f3e82bf33880877c2ed5bb650469a176eb88fb9627989e47db2745d483f6db6f-runc.NnOGkE.mount: Deactivated successfully. Apr 24 23:54:24.664661 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:53848.service - OpenSSH per-connection server daemon (10.0.0.1:53848). 
Apr 24 23:54:24.737226 sshd[6289]: Accepted publickey for core from 10.0.0.1 port 53848 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:54:24.745108 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:54:24.752969 systemd-logind[1458]: New session 23 of user core. Apr 24 23:54:24.769115 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 24 23:54:24.917617 sshd[6289]: pam_unix(sshd:session): session closed for user core Apr 24 23:54:24.924567 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:53848.service: Deactivated successfully. Apr 24 23:54:24.927226 systemd[1]: session-23.scope: Deactivated successfully. Apr 24 23:54:24.928000 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Apr 24 23:54:24.931243 systemd-logind[1458]: Removed session 23.