Apr 24 23:38:00.901377 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:38:00.901396 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:38:00.901405 kernel: BIOS-provided physical RAM map:
Apr 24 23:38:00.901411 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 23:38:00.901416 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 24 23:38:00.901421 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 24 23:38:00.901427 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 24 23:38:00.901432 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 24 23:38:00.901437 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 24 23:38:00.901443 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 24 23:38:00.901452 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 24 23:38:00.901460 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 24 23:38:00.901468 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 24 23:38:00.901475 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 24 23:38:00.901483 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 24 23:38:00.901492 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 24 23:38:00.901502 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 24 23:38:00.901511 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 24 23:38:00.901519 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 24 23:38:00.901528 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 23:38:00.901555 kernel: NX (Execute Disable) protection: active
Apr 24 23:38:00.901561 kernel: APIC: Static calls initialized
Apr 24 23:38:00.901566 kernel: efi: EFI v2.7 by EDK II
Apr 24 23:38:00.901572 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 24 23:38:00.901578 kernel: SMBIOS 2.8 present.
Apr 24 23:38:00.901583 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 24 23:38:00.901588 kernel: Hypervisor detected: KVM
Apr 24 23:38:00.901596 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:38:00.901601 kernel: kvm-clock: using sched offset of 4576755200 cycles
Apr 24 23:38:00.901607 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:38:00.901613 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 23:38:00.901619 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:38:00.901625 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:38:00.901631 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 24 23:38:00.901637 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 23:38:00.901643 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:38:00.901650 kernel: Using GB pages for direct mapping
Apr 24 23:38:00.901655 kernel: Secure boot disabled
Apr 24 23:38:00.901661 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:38:00.901667 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 24 23:38:00.901676 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 24 23:38:00.901682 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:38:00.901688 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:38:00.901695 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 24 23:38:00.901701 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:38:00.901707 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:38:00.901713 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:38:00.901719 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:38:00.901725 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 23:38:00.901745 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 24 23:38:00.901753 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 24 23:38:00.901759 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 24 23:38:00.901765 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 24 23:38:00.901771 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 24 23:38:00.901776 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 24 23:38:00.901782 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 24 23:38:00.901788 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 24 23:38:00.901794 kernel: No NUMA configuration found
Apr 24 23:38:00.901800 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 24 23:38:00.901807 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 24 23:38:00.901813 kernel: Zone ranges:
Apr 24 23:38:00.901819 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:38:00.901824 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 24 23:38:00.901830 kernel: Normal empty
Apr 24 23:38:00.901836 kernel: Movable zone start for each node
Apr 24 23:38:00.901842 kernel: Early memory node ranges
Apr 24 23:38:00.901848 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 23:38:00.901854 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 24 23:38:00.901860 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 24 23:38:00.901867 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 24 23:38:00.901873 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 24 23:38:00.901878 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 24 23:38:00.901884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 24 23:38:00.901902 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:38:00.901909 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 23:38:00.901915 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 24 23:38:00.901921 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:38:00.901927 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 24 23:38:00.901933 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 24 23:38:00.901940 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 24 23:38:00.901946 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 23:38:00.901952 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:38:00.901958 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:38:00.901964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 23:38:00.901970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:38:00.901976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:38:00.901982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:38:00.901988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:38:00.901995 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:38:00.902001 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:38:00.902007 kernel: TSC deadline timer available
Apr 24 23:38:00.902013 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 24 23:38:00.902019 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:38:00.902025 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 23:38:00.902031 kernel: kvm-guest: setup PV sched yield
Apr 24 23:38:00.902037 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 24 23:38:00.902043 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:38:00.902050 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:38:00.902057 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 23:38:00.902063 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 24 23:38:00.902069 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 24 23:38:00.902075 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 23:38:00.902080 kernel: kvm-guest: PV spinlocks enabled
Apr 24 23:38:00.902086 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:38:00.902093 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:38:00.902101 kernel: random: crng init done
Apr 24 23:38:00.902107 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:38:00.902113 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:38:00.902119 kernel: Fallback order for Node 0: 0
Apr 24 23:38:00.902125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 24 23:38:00.902130 kernel: Policy zone: DMA32
Apr 24 23:38:00.902136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:38:00.902143 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167136K reserved, 0K cma-reserved)
Apr 24 23:38:00.902149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 23:38:00.902156 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:38:00.902162 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:38:00.902243 kernel: Dynamic Preempt: voluntary
Apr 24 23:38:00.902250 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:38:00.902262 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:38:00.902270 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 23:38:00.902277 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:38:00.902283 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:38:00.902290 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:38:00.902296 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:38:00.902303 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 23:38:00.902309 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 23:38:00.902317 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:38:00.902324 kernel: Console: colour dummy device 80x25
Apr 24 23:38:00.902330 kernel: printk: console [ttyS0] enabled
Apr 24 23:38:00.902337 kernel: ACPI: Core revision 20230628
Apr 24 23:38:00.902343 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 23:38:00.902351 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:38:00.902358 kernel: x2apic enabled
Apr 24 23:38:00.902365 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:38:00.902371 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 23:38:00.902378 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 23:38:00.902384 kernel: kvm-guest: setup PV IPIs
Apr 24 23:38:00.902390 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 23:38:00.902397 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:38:00.902404 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 23:38:00.902412 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 23:38:00.902418 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 23:38:00.902425 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 23:38:00.902431 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:38:00.902438 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:38:00.902446 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:38:00.902456 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:38:00.902466 kernel: RETBleed: Vulnerable
Apr 24 23:38:00.902474 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:38:00.902487 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:38:00.902496 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 23:38:00.902507 kernel: active return thunk: its_return_thunk
Apr 24 23:38:00.902523 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:38:00.902550 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:38:00.902557 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:38:00.902564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:38:00.902570 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:38:00.902577 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:38:00.902585 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:38:00.902592 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:38:00.902598 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:38:00.902605 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:38:00.902611 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:38:00.902618 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 23:38:00.902625 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:38:00.902631 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:38:00.902637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:38:00.902646 kernel: landlock: Up and running.
Apr 24 23:38:00.902652 kernel: SELinux: Initializing.
Apr 24 23:38:00.902659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:38:00.902665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:38:00.902672 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 23:38:00.902679 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:38:00.902685 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:38:00.902692 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 23:38:00.902701 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 23:38:00.902707 kernel: signal: max sigframe size: 3632
Apr 24 23:38:00.902714 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:38:00.902720 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:38:00.902727 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:38:00.902734 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:38:00.902740 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:38:00.902747 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 23:38:00.902753 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 23:38:00.902761 kernel: smpboot: Max logical packages: 1
Apr 24 23:38:00.902768 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 23:38:00.902775 kernel: devtmpfs: initialized
Apr 24 23:38:00.902781 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:38:00.902788 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 24 23:38:00.902794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 24 23:38:00.902801 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 24 23:38:00.902808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 24 23:38:00.902814 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 24 23:38:00.902822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:38:00.902829 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 23:38:00.902836 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:38:00.902842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:38:00.902849 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:38:00.902855 kernel: audit: type=2000 audit(1777073880.123:1): state=initialized audit_enabled=0 res=1
Apr 24 23:38:00.902862 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:38:00.902868 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:38:00.902875 kernel: cpuidle: using governor menu
Apr 24 23:38:00.902883 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:38:00.902889 kernel: dca service started, version 1.12.1
Apr 24 23:38:00.902896 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 24 23:38:00.902902 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 23:38:00.902909 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:38:00.902915 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:38:00.902922 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:38:00.902929 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:38:00.902935 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:38:00.902943 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:38:00.902949 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:38:00.902956 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:38:00.902962 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:38:00.902969 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:38:00.902975 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:38:00.902982 kernel: ACPI: Interpreter enabled
Apr 24 23:38:00.902988 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 23:38:00.902995 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:38:00.903002 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:38:00.903009 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:38:00.903015 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:38:00.903022 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:38:00.903143 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:38:00.903247 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:38:00.903304 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:38:00.903313 kernel: PCI host bridge to bus 0000:00
Apr 24 23:38:00.903372 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:38:00.903444 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:38:00.903553 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:38:00.903609 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 24 23:38:00.903658 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 23:38:00.903706 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 24 23:38:00.903758 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:38:00.903889 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:38:00.903995 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 24 23:38:00.904052 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 24 23:38:00.904106 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 24 23:38:00.904194 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 24 23:38:00.904330 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 24 23:38:00.904392 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:38:00.904459 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 24 23:38:00.904566 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 24 23:38:00.904626 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 24 23:38:00.904680 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 24 23:38:00.904743 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 24 23:38:00.904800 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 24 23:38:00.904859 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 24 23:38:00.904915 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 24 23:38:00.904975 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 24 23:38:00.905030 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 24 23:38:00.905084 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 24 23:38:00.905138 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 24 23:38:00.905231 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 24 23:38:00.905291 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:38:00.905345 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:38:00.905408 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:38:00.905472 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 24 23:38:00.905573 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 24 23:38:00.905634 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:38:00.905692 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 24 23:38:00.905699 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:38:00.905705 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:38:00.905710 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:38:00.905715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:38:00.905721 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:38:00.905726 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:38:00.905732 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:38:00.905739 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:38:00.905744 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:38:00.905750 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:38:00.905755 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:38:00.905760 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:38:00.905766 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:38:00.905771 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:38:00.905776 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:38:00.905782 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:38:00.905789 kernel: iommu: Default domain type: Translated
Apr 24 23:38:00.905795 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:38:00.905800 kernel: efivars: Registered efivars operations
Apr 24 23:38:00.905806 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:38:00.905811 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:38:00.905817 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 24 23:38:00.905822 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 24 23:38:00.905827 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 24 23:38:00.905833 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 24 23:38:00.905887 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:38:00.905942 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:38:00.905996 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:38:00.906003 kernel: vgaarb: loaded
Apr 24 23:38:00.906009 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:38:00.906015 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:38:00.906020 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:38:00.906026 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:38:00.906032 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:38:00.906039 kernel: pnp: PnP ACPI init
Apr 24 23:38:00.906120 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 23:38:00.906130 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 23:38:00.906151 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:38:00.906157 kernel: NET: Registered PF_INET protocol family
Apr 24 23:38:00.906191 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:38:00.906197 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:38:00.906203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:38:00.906209 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:38:00.906216 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:38:00.906222 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:38:00.906228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:38:00.906234 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:38:00.906239 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:38:00.906245 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:38:00.906306 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 24 23:38:00.906362 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 24 23:38:00.906417 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 23:38:00.906554 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 23:38:00.906612 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 23:38:00.906662 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 24 23:38:00.906711 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 23:38:00.906760 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 24 23:38:00.906767 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:38:00.906772 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:38:00.906781 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 23:38:00.906787 kernel: Initialise system trusted keyrings
Apr 24 23:38:00.906793 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:38:00.906798 kernel: Key type asymmetric registered
Apr 24 23:38:00.906804 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:38:00.906809 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:38:00.906815 kernel: io scheduler mq-deadline registered
Apr 24 23:38:00.906820 kernel: io scheduler kyber registered
Apr 24 23:38:00.906826 kernel: io scheduler bfq registered
Apr 24 23:38:00.906833 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:38:00.906839 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 23:38:00.906845 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 23:38:00.906850 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 23:38:00.906856 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:38:00.906861 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:38:00.906867 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 23:38:00.906873 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 23:38:00.906878 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 23:38:00.906938 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 23:38:00.906947 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:38:00.906996 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 23:38:00.907066 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T23:38:00 UTC (1777073880)
Apr 24 23:38:00.907119 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 24 23:38:00.907125 kernel: intel_pstate: CPU model not supported
Apr 24 23:38:00.907131 kernel: efifb: probing for efifb
Apr 24 23:38:00.907137 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 24 23:38:00.907145 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 24 23:38:00.907150 kernel: efifb: scrolling: redraw
Apr 24 23:38:00.907156 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 24 23:38:00.907161 kernel: Console: switching to colour frame buffer device 100x37
Apr 24 23:38:00.907198 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:38:00.907217 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:38:00.907224 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:38:00.907230 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:38:00.907235 kernel: Segment Routing with IPv6
Apr 24 23:38:00.907242 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:38:00.907248 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:38:00.907254 kernel: Key type dns_resolver registered
Apr 24 23:38:00.907259 kernel: IPI shorthand broadcast: enabled
Apr 24 23:38:00.907265 kernel: sched_clock: Marking stable (837014664, 217134194)->(1111611244, -57462386)
Apr 24 23:38:00.907271 kernel: registered taskstats version 1
Apr 24 23:38:00.907277 kernel: Loading compiled-in X.509 certificates
Apr 24 23:38:00.907282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:38:00.907288 kernel: Key type .fscrypt registered
Apr 24 23:38:00.907295 kernel: Key type fscrypt-provisioning registered
Apr 24 23:38:00.907301 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:38:00.907306 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:38:00.907312 kernel: ima: No architecture policies found
Apr 24 23:38:00.907317 kernel: clk: Disabling unused clocks
Apr 24 23:38:00.907323 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:38:00.907329 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:38:00.907334 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:38:00.907340 kernel: Run /init as init process
Apr 24 23:38:00.907347 kernel: with arguments:
Apr 24 23:38:00.907353 kernel: /init
Apr 24 23:38:00.907358 kernel: with environment:
Apr 24 23:38:00.907363 kernel: HOME=/
Apr 24 23:38:00.907369 kernel: TERM=linux
Apr 24 23:38:00.907376 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:38:00.907384 systemd[1]: Detected virtualization kvm.
Apr 24 23:38:00.907392 systemd[1]: Detected architecture x86-64.
Apr 24 23:38:00.907398 systemd[1]: Running in initrd.
Apr 24 23:38:00.907404 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:38:00.907410 systemd[1]: Hostname set to .
Apr 24 23:38:00.907416 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:38:00.907424 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:38:00.907430 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:38:00.907436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:38:00.907443 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:38:00.907453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:38:00.907463 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:38:00.907472 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:38:00.907483 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:38:00.907495 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:38:00.907505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:38:00.907515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:38:00.907526 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:38:00.907553 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:38:00.907559 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:38:00.907565 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:38:00.907571 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:38:00.907579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:38:00.907587 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:38:00.907593 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:38:00.907599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:38:00.907605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:38:00.907611 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:38:00.907617 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:38:00.907623 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 24 23:38:00.907631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:38:00.907650 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 24 23:38:00.907656 systemd[1]: Starting systemd-fsck-usr.service... Apr 24 23:38:00.907662 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:38:00.907668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:38:00.907686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:38:00.907704 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 24 23:38:00.907725 systemd-journald[194]: Collecting audit messages is disabled. Apr 24 23:38:00.907742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:38:00.907749 systemd[1]: Finished systemd-fsck-usr.service. Apr 24 23:38:00.907758 systemd-journald[194]: Journal started Apr 24 23:38:00.907773 systemd-journald[194]: Runtime Journal (/run/log/journal/62898656c1d349399640c1f1cd770a76) is 6.0M, max 48.3M, 42.2M free. Apr 24 23:38:00.911142 systemd-modules-load[195]: Inserted module 'overlay' Apr 24 23:38:00.913968 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:38:00.913441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:38:00.932426 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:38:00.934370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:38:00.935287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:38:00.942645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 24 23:38:00.948358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:38:00.951392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:38:00.956400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:38:00.961150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:38:00.968386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 24 23:38:00.978027 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 24 23:38:00.980281 kernel: Bridge firewalling registered Apr 24 23:38:00.980079 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 24 23:38:00.981082 dracut-cmdline[224]: dracut-dracut-053 Apr 24 23:38:00.980893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:38:00.985442 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:38:00.990574 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:38:01.004530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:38:01.012683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:38:01.033830 systemd-resolved[254]: Positive Trust Anchors:
Apr 24 23:38:01.033858 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:38:01.033883 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:38:01.035790 systemd-resolved[254]: Defaulting to hostname 'linux'. Apr 24 23:38:01.036476 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:38:01.038753 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:38:01.088231 kernel: SCSI subsystem initialized Apr 24 23:38:01.096203 kernel: Loading iSCSI transport class v2.0-870. Apr 24 23:38:01.107227 kernel: iscsi: registered transport (tcp) Apr 24 23:38:01.126245 kernel: iscsi: registered transport (qla4xxx) Apr 24 23:38:01.126573 kernel: QLogic iSCSI HBA Driver Apr 24 23:38:01.158338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 24 23:38:01.173355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 24 23:38:01.195284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:38:01.195337 kernel: device-mapper: uevent: version 1.0.3 Apr 24 23:38:01.197126 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 24 23:38:01.235236 kernel: raid6: avx512x4 gen() 44865 MB/s Apr 24 23:38:01.252227 kernel: raid6: avx512x2 gen() 43376 MB/s Apr 24 23:38:01.269252 kernel: raid6: avx512x1 gen() 44504 MB/s Apr 24 23:38:01.286380 kernel: raid6: avx2x4 gen() 37312 MB/s Apr 24 23:38:01.303519 kernel: raid6: avx2x2 gen() 37451 MB/s Apr 24 23:38:01.321364 kernel: raid6: avx2x1 gen() 28532 MB/s Apr 24 23:38:01.321385 kernel: raid6: using algorithm avx512x4 gen() 44865 MB/s Apr 24 23:38:01.340437 kernel: raid6: .... xor() 10031 MB/s, rmw enabled Apr 24 23:38:01.340491 kernel: raid6: using avx512x2 recovery algorithm Apr 24 23:38:01.359217 kernel: xor: automatically using best checksumming function avx Apr 24 23:38:01.483240 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 24 23:38:01.492337 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:38:01.505338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:38:01.514466 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 24 23:38:01.517036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:38:01.518572 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 24 23:38:01.540757 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 24 23:38:01.564892 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:38:01.584397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:38:01.614833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:38:01.624363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 24 23:38:01.631790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 24 23:38:01.636533 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:38:01.639082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:38:01.641719 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:38:01.654374 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 24 23:38:01.655345 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 23:38:01.656572 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 23:38:01.663309 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 24 23:38:01.673404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:38:01.681419 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 24 23:38:01.681436 kernel: GPT:9289727 != 19775487 Apr 24 23:38:01.681457 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 24 23:38:01.681485 kernel: GPT:9289727 != 19775487 Apr 24 23:38:01.681493 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 24 23:38:01.681500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 23:38:01.681507 kernel: libata version 3.00 loaded. Apr 24 23:38:01.673503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:38:01.685200 kernel: AVX2 version of gcm_enc/dec engaged. Apr 24 23:38:01.685212 kernel: AES CTR mode by8 optimization enabled Apr 24 23:38:01.689050 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:38:01.692294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:38:01.692419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 24 23:38:01.695070 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:38:01.708886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:38:01.718125 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Apr 24 23:38:01.711570 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:38:01.721720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:38:01.726250 kernel: ahci 0000:00:1f.2: version 3.0 Apr 24 23:38:01.730407 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 24 23:38:01.735344 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 24 23:38:01.735361 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (465) Apr 24 23:38:01.735369 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 24 23:38:01.735480 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 24 23:38:01.738201 kernel: scsi host0: ahci Apr 24 23:38:01.738326 kernel: scsi host1: ahci Apr 24 23:38:01.739754 kernel: scsi host2: ahci Apr 24 23:38:01.742187 kernel: scsi host3: ahci Apr 24 23:38:01.742322 kernel: scsi host4: ahci Apr 24 23:38:01.743215 kernel: scsi host5: ahci Apr 24 23:38:01.745390 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 24 23:38:01.746955 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 24 23:38:01.748506 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 24 23:38:01.748522 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 24 23:38:01.750260 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 24 23:38:01.757500 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 24 23:38:01.757515 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 24 23:38:01.756745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 23:38:01.760444 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 24 23:38:01.762152 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 24 23:38:01.784429 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 24 23:38:01.785591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:38:01.785647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:38:01.789248 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:38:01.794661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:38:01.805678 disk-uuid[569]: Primary Header is updated. Apr 24 23:38:01.805678 disk-uuid[569]: Secondary Entries is updated. Apr 24 23:38:01.805678 disk-uuid[569]: Secondary Header is updated. Apr 24 23:38:01.809775 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:38:01.814324 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:38:01.816571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 23:38:01.821208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 23:38:01.824213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 23:38:01.836818 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 24 23:38:02.068201 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 24 23:38:02.068274 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 24 23:38:02.077057 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 24 23:38:02.077105 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 24 23:38:02.077116 kernel: ata3.00: applying bridge limits Apr 24 23:38:02.078210 kernel: ata3.00: configured for UDMA/100 Apr 24 23:38:02.082217 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 24 23:38:02.082263 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 24 23:38:02.085211 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 24 23:38:02.087211 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 24 23:38:02.124880 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 24 23:38:02.125107 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 23:38:02.137234 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 24 23:38:02.827221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 23:38:02.827287 disk-uuid[571]: The operation has completed successfully. Apr 24 23:38:02.849732 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 24 23:38:02.849842 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 23:38:02.874410 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 23:38:02.878922 sh[600]: Success Apr 24 23:38:02.889203 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 24 23:38:02.918290 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 23:38:02.933433 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 23:38:02.938847 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 24 23:38:02.947787 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681 Apr 24 23:38:02.947811 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:38:02.947820 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 24 23:38:02.951001 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 24 23:38:02.951021 kernel: BTRFS info (device dm-0): using free space tree Apr 24 23:38:02.956694 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 23:38:02.959155 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 24 23:38:02.973306 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 23:38:02.974825 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 24 23:38:02.989199 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:38:02.989230 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:38:02.989238 kernel: BTRFS info (device vda6): using free space tree Apr 24 23:38:02.993213 kernel: BTRFS info (device vda6): auto enabling async discard Apr 24 23:38:03.000469 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 24 23:38:03.003724 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:38:03.011668 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 23:38:03.019307 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 24 23:38:03.061447 ignition[710]: Ignition 2.19.0 Apr 24 23:38:03.061465 ignition[710]: Stage: fetch-offline Apr 24 23:38:03.061489 ignition[710]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:38:03.061495 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 23:38:03.061582 ignition[710]: parsed url from cmdline: "" Apr 24 23:38:03.067316 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:38:03.061584 ignition[710]: no config URL provided Apr 24 23:38:03.061588 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 23:38:03.061593 ignition[710]: no config at "/usr/lib/ignition/user.ign" Apr 24 23:38:03.061618 ignition[710]: op(1): [started] loading QEMU firmware config module Apr 24 23:38:03.061622 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 24 23:38:03.072064 ignition[710]: op(1): [finished] loading QEMU firmware config module Apr 24 23:38:03.095380 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:38:03.111049 systemd-networkd[789]: lo: Link UP Apr 24 23:38:03.111077 systemd-networkd[789]: lo: Gained carrier Apr 24 23:38:03.111954 systemd-networkd[789]: Enumeration completed Apr 24 23:38:03.112292 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:38:03.113984 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:38:03.113986 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:38:03.114815 systemd[1]: Reached target network.target - Network. 
Apr 24 23:38:03.114850 systemd-networkd[789]: eth0: Link UP Apr 24 23:38:03.114852 systemd-networkd[789]: eth0: Gained carrier Apr 24 23:38:03.114857 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:38:03.143229 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 23:38:03.222010 ignition[710]: parsing config with SHA512: 42a0fd632ef2136dd6e9028ade5d734c31d58b132ca6ef09266a20b0e7cde9708ad26dba2ac299ed07998f550a7abef804e67191d5f4766005f92b22dc6c9a90 Apr 24 23:38:03.225594 unknown[710]: fetched base config from "system" Apr 24 23:38:03.225983 ignition[710]: fetch-offline: fetch-offline passed Apr 24 23:38:03.225606 unknown[710]: fetched user config from "qemu" Apr 24 23:38:03.226048 ignition[710]: Ignition finished successfully Apr 24 23:38:03.231928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:38:03.236534 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 24 23:38:03.249332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 24 23:38:03.264605 ignition[793]: Ignition 2.19.0 Apr 24 23:38:03.264625 ignition[793]: Stage: kargs Apr 24 23:38:03.264742 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:38:03.264748 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 23:38:03.265376 ignition[793]: kargs: kargs passed Apr 24 23:38:03.265404 ignition[793]: Ignition finished successfully Apr 24 23:38:03.274071 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 24 23:38:03.287395 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 24 23:38:03.301272 ignition[801]: Ignition 2.19.0 Apr 24 23:38:03.301292 ignition[801]: Stage: disks Apr 24 23:38:03.301415 ignition[801]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:38:03.301421 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 23:38:03.302008 ignition[801]: disks: disks passed Apr 24 23:38:03.302035 ignition[801]: Ignition finished successfully Apr 24 23:38:03.307096 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 24 23:38:03.309892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 24 23:38:03.312662 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 23:38:03.316378 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:38:03.319889 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:38:03.323655 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:38:03.332307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 24 23:38:03.343792 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 24 23:38:03.347752 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 23:38:03.352707 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 24 23:38:03.438228 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none. Apr 24 23:38:03.438718 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 23:38:03.440042 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 23:38:03.461416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:38:03.465406 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 23:38:03.466293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 24 23:38:03.466323 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 23:38:03.466339 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:38:03.482596 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819) Apr 24 23:38:03.471969 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 23:38:03.478381 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 24 23:38:03.492541 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:38:03.492577 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:38:03.492586 kernel: BTRFS info (device vda6): using free space tree Apr 24 23:38:03.496197 kernel: BTRFS info (device vda6): auto enabling async discard Apr 24 23:38:03.497290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 23:38:03.517107 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 23:38:03.522412 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Apr 24 23:38:03.526009 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 23:38:03.530102 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 23:38:03.599065 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 23:38:03.608357 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 23:38:03.612627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 23:38:03.618594 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:38:03.632971 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 24 23:38:03.636210 ignition[934]: INFO : Ignition 2.19.0 Apr 24 23:38:03.636210 ignition[934]: INFO : Stage: mount Apr 24 23:38:03.636210 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:38:03.636210 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 23:38:03.636210 ignition[934]: INFO : mount: mount passed Apr 24 23:38:03.636210 ignition[934]: INFO : Ignition finished successfully Apr 24 23:38:03.645931 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 23:38:03.658329 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 23:38:03.945606 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 24 23:38:03.954429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:38:03.962213 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946) Apr 24 23:38:03.962240 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:38:03.965687 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:38:03.965704 kernel: BTRFS info (device vda6): using free space tree Apr 24 23:38:03.971212 kernel: BTRFS info (device vda6): auto enabling async discard Apr 24 23:38:03.971625 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 24 23:38:03.998614 ignition[963]: INFO : Ignition 2.19.0 Apr 24 23:38:03.998614 ignition[963]: INFO : Stage: files Apr 24 23:38:04.001609 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:38:04.001609 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 23:38:04.001609 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Apr 24 23:38:04.001609 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 24 23:38:04.001609 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 24 23:38:04.012790 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 24 23:38:04.012790 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 24 23:38:04.012790 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 24 23:38:04.012790 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:38:04.012790 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 24 23:38:04.002695 unknown[963]: wrote ssh authorized keys file for user: core Apr 24 23:38:04.069009 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 24 23:38:04.176355 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:38:04.176355 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:38:04.183073 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 24 23:38:04.462672 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 24 23:38:04.778422 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:38:04.778422 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 24 23:38:04.784852 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:38:04.788535 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:38:04.788535 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 24 23:38:04.788535 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 24 23:38:04.796228 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 24 23:38:04.799909 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 24 23:38:04.799909 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 24 23:38:04.799909 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 24 23:38:04.827297 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 24 23:38:04.830832 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 24 23:38:04.833694 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled
for "coreos-metadata.service" Apr 24 23:38:04.833694 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 24 23:38:04.838659 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 24 23:38:04.838659 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:38:04.838659 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:38:04.838659 ignition[963]: INFO : files: files passed Apr 24 23:38:04.838659 ignition[963]: INFO : Ignition finished successfully Apr 24 23:38:04.839548 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 24 23:38:04.855387 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 24 23:38:04.857900 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 24 23:38:04.860318 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 24 23:38:04.860401 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 24 23:38:04.869462 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Apr 24 23:38:04.874098 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:38:04.874098 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:38:04.871697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:38:04.883939 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:38:04.874443 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Apr 24 23:38:04.891328 systemd-networkd[789]: eth0: Gained IPv6LL
Apr 24 23:38:04.893295 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:38:04.910433 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:38:04.910651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:38:04.914530 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:38:04.918327 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:38:04.921770 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:38:04.923635 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:38:04.940053 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:38:04.941745 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:38:04.954399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:38:04.957014 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:38:04.957790 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:38:04.961653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:38:04.961745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:38:04.968029 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:38:04.968968 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:38:04.973883 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:38:04.977125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:38:04.980816 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:38:04.984242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:38:04.988103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:38:04.991678 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:38:04.995692 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:38:04.999192 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:38:05.002775 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:38:05.002874 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:38:05.007948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:38:05.008905 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:38:05.013805 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:38:05.013921 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:38:05.017646 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:38:05.017768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:38:05.024487 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:38:05.024611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:38:05.028792 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:38:05.031946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:38:05.036237 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:38:05.038024 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:38:05.041840 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:38:05.044834 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:38:05.044922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:38:05.048837 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:38:05.048906 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:38:05.051968 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:38:05.052068 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:38:05.055216 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:38:05.055311 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:38:05.075383 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:38:05.079024 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:38:05.080701 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:38:05.084146 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:38:05.088590 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:38:05.090022 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:38:05.094597 ignition[1019]: INFO : Ignition 2.19.0
Apr 24 23:38:05.094597 ignition[1019]: INFO : Stage: umount
Apr 24 23:38:05.094597 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:38:05.094597 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:38:05.094597 ignition[1019]: INFO : umount: umount passed
Apr 24 23:38:05.094597 ignition[1019]: INFO : Ignition finished successfully
Apr 24 23:38:05.096129 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:38:05.096632 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:38:05.096717 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:38:05.098764 systemd[1]: Stopped target network.target - Network.
Apr 24 23:38:05.102016 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:38:05.102130 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:38:05.105685 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:38:05.105748 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:38:05.110867 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:38:05.110900 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:38:05.114001 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:38:05.114032 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:38:05.117699 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:38:05.120922 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:38:05.124327 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:38:05.124408 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:38:05.131919 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:38:05.132157 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:38:05.134475 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:38:05.134513 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:38:05.139285 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:38:05.139367 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:38:05.141400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:38:05.141453 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:38:05.167326 systemd-networkd[789]: eth0: DHCPv6 lease lost
Apr 24 23:38:05.171869 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:38:05.171988 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:38:05.177312 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:38:05.177373 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:38:05.187285 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:38:05.187992 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:38:05.188030 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:38:05.191635 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:38:05.191675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:38:05.195635 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:38:05.195690 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:38:05.199692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:38:05.212050 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:38:05.212158 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:38:05.226834 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:38:05.227017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:38:05.230915 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:38:05.230943 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:38:05.234928 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:38:05.234954 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:38:05.235881 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:38:05.235908 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:38:05.243727 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:38:05.243762 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:38:05.249091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:38:05.249124 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:38:05.270318 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:38:05.271212 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:38:05.271257 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:38:05.276670 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:38:05.276701 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:38:05.280635 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:38:05.280666 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:38:05.284901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:38:05.284932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:38:05.289007 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:38:05.289089 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:38:05.295027 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:38:05.296621 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:38:05.311009 systemd[1]: Switching root.
Apr 24 23:38:05.340907 systemd-journald[194]: Journal stopped
Apr 24 23:38:06.053965 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:38:06.054011 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:38:06.054022 kernel: SELinux: policy capability open_perms=1
Apr 24 23:38:06.054030 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:38:06.054040 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:38:06.054051 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:38:06.054058 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:38:06.054065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:38:06.054073 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:38:06.054080 kernel: audit: type=1403 audit(1777073885.450:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:38:06.054090 systemd[1]: Successfully loaded SELinux policy in 37.803ms.
Apr 24 23:38:06.054106 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.818ms.
Apr 24 23:38:06.054116 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:38:06.054123 systemd[1]: Detected virtualization kvm.
Apr 24 23:38:06.054132 systemd[1]: Detected architecture x86-64.
Apr 24 23:38:06.054139 systemd[1]: Detected first boot.
Apr 24 23:38:06.054147 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:38:06.054154 zram_generator::config[1063]: No configuration found.
Apr 24 23:38:06.054206 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:38:06.054216 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:38:06.054225 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:38:06.054235 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:38:06.054244 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:38:06.054251 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:38:06.054259 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:38:06.054267 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:38:06.054275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:38:06.054286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:38:06.054294 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:38:06.054303 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:38:06.054311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:38:06.054319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:38:06.054327 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:38:06.054334 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:38:06.054342 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:38:06.054351 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:38:06.054358 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:38:06.054366 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:38:06.054376 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:38:06.054386 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:38:06.054396 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:38:06.054405 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:38:06.054412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:38:06.054420 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:38:06.054428 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:38:06.054435 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:38:06.054444 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:38:06.054451 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:38:06.054459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:38:06.054467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:38:06.054474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:38:06.054482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:38:06.054489 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:38:06.054497 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:38:06.054505 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:38:06.054513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:38:06.054521 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:38:06.054528 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:38:06.054536 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:38:06.054544 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:38:06.054552 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:38:06.054559 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:38:06.054588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:38:06.054598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:38:06.054606 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:38:06.054613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:38:06.054621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:38:06.054629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:38:06.054636 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:38:06.054644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:38:06.054653 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:38:06.054660 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:38:06.054670 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:38:06.054677 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:38:06.054685 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:38:06.054692 kernel: fuse: init (API version 7.39)
Apr 24 23:38:06.054699 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:38:06.054706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:38:06.054714 kernel: ACPI: bus type drm_connector registered
Apr 24 23:38:06.054721 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:38:06.054728 kernel: loop: module loaded
Apr 24 23:38:06.054748 systemd-journald[1147]: Collecting audit messages is disabled.
Apr 24 23:38:06.054765 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:38:06.054774 systemd-journald[1147]: Journal started
Apr 24 23:38:06.054843 systemd-journald[1147]: Runtime Journal (/run/log/journal/62898656c1d349399640c1f1cd770a76) is 6.0M, max 48.3M, 42.2M free.
Apr 24 23:38:05.771888 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:38:05.789068 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 23:38:05.789424 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 23:38:06.061216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:38:06.064462 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:38:06.065994 systemd[1]: Stopped verity-setup.service.
Apr 24 23:38:06.066021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:38:06.073454 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:38:06.073920 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:38:06.075865 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:38:06.077881 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:38:06.079725 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:38:06.081665 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:38:06.083674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:38:06.085596 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:38:06.087859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:38:06.090273 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:38:06.090397 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:38:06.092694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:38:06.092801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:38:06.094942 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:38:06.095080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:38:06.097097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:38:06.097257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:38:06.099665 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:38:06.099799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:38:06.101916 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:38:06.102051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:38:06.104157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:38:06.106390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:38:06.108835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:38:06.111232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:38:06.120916 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:38:06.129290 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:38:06.132125 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:38:06.134116 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:38:06.134157 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:38:06.136847 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:38:06.139871 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:38:06.142668 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:38:06.144487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:38:06.145782 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:38:06.147224 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:38:06.150816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:38:06.152101 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:38:06.154288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:38:06.155044 systemd-journald[1147]: Time spent on flushing to /var/log/journal/62898656c1d349399640c1f1cd770a76 is 10.426ms for 997 entries.
Apr 24 23:38:06.155044 systemd-journald[1147]: System Journal (/var/log/journal/62898656c1d349399640c1f1cd770a76) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:38:06.172946 systemd-journald[1147]: Received client request to flush runtime journal.
Apr 24 23:38:06.155363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:38:06.164336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:38:06.171801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:38:06.178090 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:38:06.181404 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:38:06.184208 kernel: loop0: detected capacity change from 0 to 142488
Apr 24 23:38:06.184888 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:38:06.187272 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:38:06.190113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:38:06.192966 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:38:06.197496 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:38:06.205149 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 24 23:38:06.205387 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 24 23:38:06.212220 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:38:06.213975 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:38:06.216971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:38:06.220767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:38:06.227228 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:38:06.230427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:38:06.231051 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:38:06.241455 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 24 23:38:06.248238 kernel: loop1: detected capacity change from 0 to 140768
Apr 24 23:38:06.268871 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:38:06.283311 kernel: loop2: detected capacity change from 0 to 219192
Apr 24 23:38:06.279430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:38:06.292896 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Apr 24 23:38:06.292909 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Apr 24 23:38:06.296115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:38:06.318238 kernel: loop3: detected capacity change from 0 to 142488
Apr 24 23:38:06.336215 kernel: loop4: detected capacity change from 0 to 140768
Apr 24 23:38:06.349214 kernel: loop5: detected capacity change from 0 to 219192
Apr 24 23:38:06.357001 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 24 23:38:06.357370 (sd-merge)[1205]: Merged extensions into '/usr'.
Apr 24 23:38:06.361304 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:38:06.361331 systemd[1]: Reloading...
Apr 24 23:38:06.394203 zram_generator::config[1228]: No configuration found.
Apr 24 23:38:06.424255 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:38:06.483296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:38:06.513792 systemd[1]: Reloading finished in 152 ms.
Apr 24 23:38:06.547955 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:38:06.550349 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:38:06.552734 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:38:06.565796 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:38:06.568296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:38:06.571302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:38:06.574829 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:38:06.574838 systemd[1]: Reloading...
Apr 24 23:38:06.586509 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:38:06.586749 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:38:06.587325 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:38:06.587501 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Apr 24 23:38:06.587555 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Apr 24 23:38:06.589505 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:38:06.589527 systemd-tmpfiles[1271]: Skipping /boot
Apr 24 23:38:06.591681 systemd-udevd[1272]: Using default interface naming scheme 'v255'.
Apr 24 23:38:06.595040 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:38:06.595072 systemd-tmpfiles[1271]: Skipping /boot
Apr 24 23:38:06.625200 zram_generator::config[1307]: No configuration found.
Apr 24 23:38:06.657249 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1312)
Apr 24 23:38:06.677269 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 24 23:38:06.686217 kernel: ACPI: button: Power Button [PWRF]
Apr 24 23:38:06.704339 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 24 23:38:06.707394 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:38:06.711266 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 24 23:38:06.711450 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 24 23:38:06.714800 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 24 23:38:06.714930 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 24 23:38:06.734234 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 23:38:06.748250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:38:06.751075 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 23:38:06.751154 systemd[1]: Reloading finished in 176 ms.
Apr 24 23:38:06.805682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:38:06.846305 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:38:06.861558 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:38:06.867770 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:38:06.879787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:38:06.890418 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:38:06.893946 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:38:06.896245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:38:06.897212 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:38:06.900106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:38:06.904312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:38:06.906989 lvm[1372]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:38:06.907802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:38:06.911945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:38:06.914133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:38:06.915913 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:38:06.919038 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:38:06.923505 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:38:06.928418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:38:06.933905 augenrules[1395]: No rules
Apr 24 23:38:06.934236 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 24 23:38:06.938398 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:38:06.941825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:38:06.943785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:38:06.944410 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:38:06.946681 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:38:06.949334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:38:06.949444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:38:06.951791 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:38:06.951930 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:38:06.954107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:38:06.954252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:38:06.956663 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:38:06.956789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:38:06.958955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:38:06.961550 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:38:06.964318 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:38:06.973051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:38:06.983433 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:38:06.984198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:38:06.984305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:38:06.985280 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:38:06.989848 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:38:06.990558 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:38:06.991774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:38:06.994450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:38:06.996899 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:38:06.998989 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:38:07.016937 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:38:07.027222 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:38:07.060008 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 24 23:38:07.060855 systemd-networkd[1389]: lo: Link UP
Apr 24 23:38:07.060858 systemd-networkd[1389]: lo: Gained carrier
Apr 24 23:38:07.061695 systemd-networkd[1389]: Enumeration completed
Apr 24 23:38:07.062213 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:38:07.062215 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:38:07.062812 systemd-networkd[1389]: eth0: Link UP
Apr 24 23:38:07.062830 systemd-networkd[1389]: eth0: Gained carrier
Apr 24 23:38:07.062840 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:38:07.063090 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:38:07.065086 systemd-resolved[1391]: Positive Trust Anchors:
Apr 24 23:38:07.065122 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:38:07.065146 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:38:07.065702 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:38:07.068027 systemd-resolved[1391]: Defaulting to hostname 'linux'.
Apr 24 23:38:07.076346 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:38:07.078661 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:38:07.080640 systemd[1]: Reached target network.target - Network.
Apr 24 23:38:07.081617 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 23:38:07.082265 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Apr 24 23:38:07.082474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:38:07.657314 systemd-resolved[1391]: Clock change detected. Flushing caches.
Apr 24 23:38:07.657339 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 24 23:38:07.657382 systemd-timesyncd[1394]: Initial clock synchronization to Fri 2026-04-24 23:38:07.657268 UTC.
Apr 24 23:38:07.658634 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:38:07.660553 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:38:07.662727 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:38:07.665039 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:38:07.667003 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:38:07.669227 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:38:07.671542 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:38:07.671578 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:38:07.673170 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:38:07.675405 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:38:07.678432 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:38:07.688796 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:38:07.691194 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:38:07.693344 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:38:07.695199 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:38:07.696887 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:38:07.696920 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:38:07.697820 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:38:07.700386 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:38:07.702777 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:38:07.705269 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:38:07.707085 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:38:07.710154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:38:07.712840 jq[1436]: false
Apr 24 23:38:07.713374 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:38:07.718252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:38:07.721271 extend-filesystems[1437]: Found loop3
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found loop4
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found loop5
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found sr0
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda1
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda2
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda3
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found usr
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda4
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda6
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda7
Apr 24 23:38:07.722917 extend-filesystems[1437]: Found vda9
Apr 24 23:38:07.722917 extend-filesystems[1437]: Checking size of /dev/vda9
Apr 24 23:38:07.750550 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 24 23:38:07.750577 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1316)
Apr 24 23:38:07.726858 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:38:07.750637 extend-filesystems[1437]: Resized partition /dev/vda9
Apr 24 23:38:07.728724 dbus-daemon[1435]: [system] SELinux support is enabled
Apr 24 23:38:07.747095 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:38:07.755834 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024)
Apr 24 23:38:07.755002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:38:07.755342 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:38:07.757828 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:38:07.765656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:38:07.785846 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 24 23:38:07.785883 update_engine[1457]: I20260424 23:38:07.777937 1457 main.cc:92] Flatcar Update Engine starting
Apr 24 23:38:07.785883 update_engine[1457]: I20260424 23:38:07.781850 1457 update_check_scheduler.cc:74] Next update check in 9m32s
Apr 24 23:38:07.768681 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:38:07.786082 jq[1458]: true
Apr 24 23:38:07.778037 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:38:07.788826 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 24 23:38:07.788826 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 24 23:38:07.788826 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 24 23:38:07.778211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:38:07.800004 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Apr 24 23:38:07.778401 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:38:07.778502 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:38:07.801751 jq[1462]: true
Apr 24 23:38:07.782456 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:38:07.782613 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:38:07.786079 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 24 23:38:07.786092 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:38:07.788357 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:38:07.788484 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:38:07.789642 systemd-logind[1453]: New seat seat0.
Apr 24 23:38:07.793487 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:38:07.801859 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:38:07.802827 dbus-daemon[1435]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 24 23:38:07.809488 tar[1461]: linux-amd64/LICENSE
Apr 24 23:38:07.810943 tar[1461]: linux-amd64/helm
Apr 24 23:38:07.812584 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:38:07.815065 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:38:07.815229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:38:07.817559 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:38:07.817654 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:38:07.824323 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:38:07.857031 bash[1490]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:38:07.857717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:38:07.861237 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 24 23:38:07.877044 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:38:07.952261 containerd[1463]: time="2026-04-24T23:38:07.951461213Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:38:07.971501 containerd[1463]: time="2026-04-24T23:38:07.971452292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973176 containerd[1463]: time="2026-04-24T23:38:07.973137107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973176 containerd[1463]: time="2026-04-24T23:38:07.973172517Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:38:07.973232 containerd[1463]: time="2026-04-24T23:38:07.973184673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:38:07.973324 containerd[1463]: time="2026-04-24T23:38:07.973295100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:38:07.973351 containerd[1463]: time="2026-04-24T23:38:07.973326485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973405 containerd[1463]: time="2026-04-24T23:38:07.973366559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973405 containerd[1463]: time="2026-04-24T23:38:07.973393838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973703 containerd[1463]: time="2026-04-24T23:38:07.973554218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973703 containerd[1463]: time="2026-04-24T23:38:07.973579560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973703 containerd[1463]: time="2026-04-24T23:38:07.973590207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973703 containerd[1463]: time="2026-04-24T23:38:07.973597204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973703 containerd[1463]: time="2026-04-24T23:38:07.973648609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.973813 containerd[1463]: time="2026-04-24T23:38:07.973777111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:38:07.974033 containerd[1463]: time="2026-04-24T23:38:07.973870415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:38:07.974033 containerd[1463]: time="2026-04-24T23:38:07.973882225Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:38:07.974033 containerd[1463]: time="2026-04-24T23:38:07.973942835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:38:07.974033 containerd[1463]: time="2026-04-24T23:38:07.973969893Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:38:07.979375 containerd[1463]: time="2026-04-24T23:38:07.979315336Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:38:07.979375 containerd[1463]: time="2026-04-24T23:38:07.979356708Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:38:07.979375 containerd[1463]: time="2026-04-24T23:38:07.979370479Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:38:07.979466 containerd[1463]: time="2026-04-24T23:38:07.979383066Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:38:07.979466 containerd[1463]: time="2026-04-24T23:38:07.979393334Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:38:07.979503 containerd[1463]: time="2026-04-24T23:38:07.979486855Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979672962Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979745931Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979756556Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979765733Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979774648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979783992Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979792462Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979801420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979818289Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979828707Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979837504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979845649Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979859835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.979945 containerd[1463]: time="2026-04-24T23:38:07.979869755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979880536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979889913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979898468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979908703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979917000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979926145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979935394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979945928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979954070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979962304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979972181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979982715Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.979996162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.980004461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980150 containerd[1463]: time="2026-04-24T23:38:07.980017068Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980049983Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980063240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980070067Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980078191Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980085214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980093560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980101111Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:38:07.980336 containerd[1463]: time="2026-04-24T23:38:07.980158312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:38:07.980438 containerd[1463]: time="2026-04-24T23:38:07.980351842Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:38:07.980438 containerd[1463]: time="2026-04-24T23:38:07.980394679Z" level=info msg="Connect containerd service"
Apr 24 23:38:07.980438 containerd[1463]: time="2026-04-24T23:38:07.980422739Z" level=info msg="using legacy CRI server"
Apr 24 23:38:07.980438 containerd[1463]: time="2026-04-24T23:38:07.980427270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:38:07.980610 containerd[1463]: time="2026-04-24T23:38:07.980508275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.980937763Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981239462Z" level=info msg="Start subscribing containerd event"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981264459Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981276164Z" level=info msg="Start recovering state"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981295662Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981341396Z" level=info msg="Start event monitor"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981355612Z" level=info msg="Start snapshots syncer"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981362291Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981368043Z" level=info msg="Start streaming server"
Apr 24 23:38:07.981646 containerd[1463]: time="2026-04-24T23:38:07.981438045Z" level=info msg="containerd successfully booted in 0.030932s"
Apr 24 23:38:07.981660 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:38:08.104269 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:38:08.122865 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:38:08.137397 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:38:08.142866 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:38:08.143090 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:38:08.146340 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:38:08.156634 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:38:08.165424 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:38:08.168006 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:38:08.170090 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:38:08.202722 tar[1461]: linux-amd64/README.md Apr 24 23:38:08.213729 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:38:09.176450 systemd-networkd[1389]: eth0: Gained IPv6LL Apr 24 23:38:09.178903 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:38:09.181678 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:38:09.195385 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 23:38:09.198698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:09.201369 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:38:09.213896 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 24 23:38:09.214015 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 23:38:09.216494 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:38:09.222168 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:38:09.848216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:09.850838 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:38:09.851981 systemd[1]: Startup finished in 959ms (kernel) + 4.746s (initrd) + 3.862s (userspace) = 9.567s. 
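The "Startup finished" entry above reports kernel 959ms + initrd 4.746s + userspace 3.862s = 9.567s. As a quick sanity check (a minimal sketch, not part of the boot sequence; the numbers are copied from the log entry), the three phases can be re-summed:

```shell
# Re-check the systemd startup accounting from the log entry above:
# 0.959s (kernel) + 4.746s (initrd) + 3.862s (userspace) should give 9.567s.
total=$(awk 'BEGIN { printf "%.3f", 0.959 + 4.746 + 3.862 }')
echo "startup total: ${total}s"   # expected: startup total: 9.567s
```

On a live system `systemd-analyze` reports the same breakdown without manual arithmetic.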
Apr 24 23:38:09.853357 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:38:10.221496 kubelet[1547]: E0424 23:38:10.221375 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:38:10.223590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:38:10.223714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:38:14.615854 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:38:14.617052 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:36760.service - OpenSSH per-connection server daemon (10.0.0.1:36760). Apr 24 23:38:14.665254 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 36760 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:14.667545 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:14.675768 systemd-logind[1453]: New session 1 of user core. Apr 24 23:38:14.676628 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:38:14.689436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:38:14.698905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:38:14.700915 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:38:14.706988 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:38:14.774737 systemd[1565]: Queued start job for default target default.target. 
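The kubelet failure above is the expected first-boot state: kubelet exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet, and systemd keeps scheduling restarts until `kubeadm init` (or `kubeadm join`) writes that file. A minimal sketch of the same check, assuming only the path reported in the log:

```shell
# Probe for the file whose absence caused the kubelet exit above.
# kubeadm normally creates it during "kubeadm init" / "kubeadm join";
# until then, kubelet.service fails with "no such file or directory".
cfg=/var/lib/kubelet/config.yaml
if [ -f "$cfg" ]; then state=present; else state=missing; fi
echo "kubelet config $state"
```

When the file is missing, the restart loop seen later in this log (restart counter incrementing, same error repeated) is normal and harmless.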
Apr 24 23:38:14.784018 systemd[1565]: Created slice app.slice - User Application Slice. Apr 24 23:38:14.784066 systemd[1565]: Reached target paths.target - Paths. Apr 24 23:38:14.784078 systemd[1565]: Reached target timers.target - Timers. Apr 24 23:38:14.785280 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:38:14.794924 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:38:14.794984 systemd[1565]: Reached target sockets.target - Sockets. Apr 24 23:38:14.794993 systemd[1565]: Reached target basic.target - Basic System. Apr 24 23:38:14.795016 systemd[1565]: Reached target default.target - Main User Target. Apr 24 23:38:14.795034 systemd[1565]: Startup finished in 83ms. Apr 24 23:38:14.795273 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:38:14.796379 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:38:14.858470 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:36762.service - OpenSSH per-connection server daemon (10.0.0.1:36762). Apr 24 23:38:14.895818 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 36762 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:14.896901 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:14.900602 systemd-logind[1453]: New session 2 of user core. Apr 24 23:38:14.910293 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:38:14.962356 sshd[1576]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:14.973002 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:36762.service: Deactivated successfully. Apr 24 23:38:14.974403 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:38:14.975498 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:38:14.976460 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:36774.service - OpenSSH per-connection server daemon (10.0.0.1:36774). 
Apr 24 23:38:14.977074 systemd-logind[1453]: Removed session 2. Apr 24 23:38:15.006555 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 36774 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:15.007759 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:15.011281 systemd-logind[1453]: New session 3 of user core. Apr 24 23:38:15.021343 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:38:15.069706 sshd[1583]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:15.083277 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:36774.service: Deactivated successfully. Apr 24 23:38:15.084520 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:38:15.085649 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:38:15.093456 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:36778.service - OpenSSH per-connection server daemon (10.0.0.1:36778). Apr 24 23:38:15.094408 systemd-logind[1453]: Removed session 3. Apr 24 23:38:15.120678 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 36778 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:15.122040 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:15.125906 systemd-logind[1453]: New session 4 of user core. Apr 24 23:38:15.136297 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:38:15.189294 sshd[1590]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:15.208508 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:36778.service: Deactivated successfully. Apr 24 23:38:15.209746 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:38:15.210787 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:38:15.219355 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:36790.service - OpenSSH per-connection server daemon (10.0.0.1:36790). 
Apr 24 23:38:15.220074 systemd-logind[1453]: Removed session 4. Apr 24 23:38:15.245689 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 36790 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:15.246737 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:15.250497 systemd-logind[1453]: New session 5 of user core. Apr 24 23:38:15.264253 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 23:38:15.319180 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:38:15.319386 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:38:15.339431 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 24 23:38:15.341018 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:15.351193 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:36790.service: Deactivated successfully. Apr 24 23:38:15.352375 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:38:15.353517 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:38:15.354594 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:36796.service - OpenSSH per-connection server daemon (10.0.0.1:36796). Apr 24 23:38:15.355138 systemd-logind[1453]: Removed session 5. Apr 24 23:38:15.384143 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 36796 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:15.385177 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:15.388746 systemd-logind[1453]: New session 6 of user core. Apr 24 23:38:15.404333 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 23:38:15.455896 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:38:15.456157 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:38:15.459559 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 24 23:38:15.463490 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:38:15.463722 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:38:15.486390 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:38:15.488108 auditctl[1612]: No rules Apr 24 23:38:15.488894 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:38:15.489082 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:38:15.490648 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:38:15.513969 augenrules[1630]: No rules Apr 24 23:38:15.515036 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:38:15.515782 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 24 23:38:15.517522 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:15.522989 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:36796.service: Deactivated successfully. Apr 24 23:38:15.524384 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:38:15.525565 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:38:15.536421 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:36810.service - OpenSSH per-connection server daemon (10.0.0.1:36810). Apr 24 23:38:15.537251 systemd-logind[1453]: Removed session 6. 
Apr 24 23:38:15.562531 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 36810 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:38:15.563458 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:15.566863 systemd-logind[1453]: New session 7 of user core. Apr 24 23:38:15.576258 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:38:15.626646 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:38:15.626864 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:38:16.268381 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:38:16.268465 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:38:17.114848 dockerd[1659]: time="2026-04-24T23:38:17.114706157Z" level=info msg="Starting up" Apr 24 23:38:17.537409 dockerd[1659]: time="2026-04-24T23:38:17.537225742Z" level=info msg="Loading containers: start." Apr 24 23:38:17.652143 kernel: Initializing XFRM netlink socket Apr 24 23:38:17.726350 systemd-networkd[1389]: docker0: Link UP Apr 24 23:38:17.748714 dockerd[1659]: time="2026-04-24T23:38:17.748654427Z" level=info msg="Loading containers: done." Apr 24 23:38:17.986063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3309333670-merged.mount: Deactivated successfully. 
Apr 24 23:38:17.988704 dockerd[1659]: time="2026-04-24T23:38:17.988572495Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:38:17.988953 dockerd[1659]: time="2026-04-24T23:38:17.988900283Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:38:17.989235 dockerd[1659]: time="2026-04-24T23:38:17.989196864Z" level=info msg="Daemon has completed initialization" Apr 24 23:38:18.033658 dockerd[1659]: time="2026-04-24T23:38:18.033511904Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:38:18.035846 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:38:18.564772 containerd[1463]: time="2026-04-24T23:38:18.564672068Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 24 23:38:19.226415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303264863.mount: Deactivated successfully. 
Apr 24 23:38:20.411536 containerd[1463]: time="2026-04-24T23:38:20.411442049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:20.412166 containerd[1463]: time="2026-04-24T23:38:20.412076285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 24 23:38:20.413496 containerd[1463]: time="2026-04-24T23:38:20.413416779Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:20.416305 containerd[1463]: time="2026-04-24T23:38:20.416241162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:20.420309 containerd[1463]: time="2026-04-24T23:38:20.418084927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.853351203s" Apr 24 23:38:20.420309 containerd[1463]: time="2026-04-24T23:38:20.418165059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 24 23:38:20.421807 containerd[1463]: time="2026-04-24T23:38:20.421748380Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 24 23:38:20.473987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
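The pull entry above gives both an image size and a wall-clock duration (27097113 bytes in 1.853351203s for kube-apiserver), which allows a rough throughput estimate. A sketch of that arithmetic, using only the two values from the log (decimal megabytes assumed):

```shell
# Approximate pull throughput for the kube-apiserver image logged above:
# 27097113 bytes / 1.853351203 s, expressed in decimal MB/s.
rate=$(awk 'BEGIN { printf "%.1f", 27097113 / 1.853351203 / 1000000 }')
echo "${rate} MB/s"   # expected: 14.6 MB/s
```

The same calculation on the later pulls (e.g. kube-proxy: 25971973 bytes in 1.534s) gives comparable rates, consistent with all images coming from the same registry path.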
Apr 24 23:38:20.489328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:20.595568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:20.599049 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:38:20.855528 kubelet[1876]: E0424 23:38:20.855394 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:38:20.857887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:38:20.858021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:38:21.550435 containerd[1463]: time="2026-04-24T23:38:21.550385251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.551090 containerd[1463]: time="2026-04-24T23:38:21.551048561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 24 23:38:21.552556 containerd[1463]: time="2026-04-24T23:38:21.552514970Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.555204 containerd[1463]: time="2026-04-24T23:38:21.555156438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:21.556316 containerd[1463]: time="2026-04-24T23:38:21.556223405Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.134421379s" Apr 24 23:38:21.556316 containerd[1463]: time="2026-04-24T23:38:21.556293754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 24 23:38:21.557545 containerd[1463]: time="2026-04-24T23:38:21.557519322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 24 23:38:22.582236 containerd[1463]: time="2026-04-24T23:38:22.582164139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:22.583028 containerd[1463]: time="2026-04-24T23:38:22.582970734Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 24 23:38:22.583995 containerd[1463]: time="2026-04-24T23:38:22.583930160Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:22.587336 containerd[1463]: time="2026-04-24T23:38:22.587269882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:22.588741 containerd[1463]: time="2026-04-24T23:38:22.588689601Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id 
\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.031137906s" Apr 24 23:38:22.588741 containerd[1463]: time="2026-04-24T23:38:22.588737922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 24 23:38:22.589829 containerd[1463]: time="2026-04-24T23:38:22.589803486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 24 23:38:23.674790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60998886.mount: Deactivated successfully. Apr 24 23:38:24.119985 containerd[1463]: time="2026-04-24T23:38:24.119841853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:24.120704 containerd[1463]: time="2026-04-24T23:38:24.120638084Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 24 23:38:24.121401 containerd[1463]: time="2026-04-24T23:38:24.121361081Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:24.123463 containerd[1463]: time="2026-04-24T23:38:24.123422481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:24.124023 containerd[1463]: time="2026-04-24T23:38:24.123994550Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag 
\"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.534159099s" Apr 24 23:38:24.124052 containerd[1463]: time="2026-04-24T23:38:24.124030325Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 24 23:38:24.125169 containerd[1463]: time="2026-04-24T23:38:24.125142098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 24 23:38:24.506610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181905521.mount: Deactivated successfully. Apr 24 23:38:25.618424 containerd[1463]: time="2026-04-24T23:38:25.618328728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:25.619173 containerd[1463]: time="2026-04-24T23:38:25.619089937Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 24 23:38:25.620474 containerd[1463]: time="2026-04-24T23:38:25.620423357Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:25.623448 containerd[1463]: time="2026-04-24T23:38:25.623331408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:25.624558 containerd[1463]: time="2026-04-24T23:38:25.624506130Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.499343128s" Apr 24 23:38:25.624558 containerd[1463]: time="2026-04-24T23:38:25.624548816Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 24 23:38:25.625989 containerd[1463]: time="2026-04-24T23:38:25.625710604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 23:38:25.994443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822838504.mount: Deactivated successfully. Apr 24 23:38:26.002462 containerd[1463]: time="2026-04-24T23:38:26.002380667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:26.003014 containerd[1463]: time="2026-04-24T23:38:26.002982660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 24 23:38:26.003985 containerd[1463]: time="2026-04-24T23:38:26.003944368Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:26.006096 containerd[1463]: time="2026-04-24T23:38:26.006050580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:26.006629 containerd[1463]: time="2026-04-24T23:38:26.006601236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
380.857685ms" Apr 24 23:38:26.006714 containerd[1463]: time="2026-04-24T23:38:26.006634958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 23:38:26.007816 containerd[1463]: time="2026-04-24T23:38:26.007791852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 24 23:38:26.460280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320065853.mount: Deactivated successfully. Apr 24 23:38:27.102407 containerd[1463]: time="2026-04-24T23:38:27.102315502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:27.102994 containerd[1463]: time="2026-04-24T23:38:27.102958384Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 24 23:38:27.104137 containerd[1463]: time="2026-04-24T23:38:27.104059092Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:27.106571 containerd[1463]: time="2026-04-24T23:38:27.106528254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:27.108359 containerd[1463]: time="2026-04-24T23:38:27.108319466Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.100505719s" Apr 24 23:38:27.108359 containerd[1463]: time="2026-04-24T23:38:27.108355684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" 
returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 24 23:38:29.275717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:29.290382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:29.312421 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit session-7.scope)... Apr 24 23:38:29.312447 systemd[1]: Reloading... Apr 24 23:38:29.373744 zram_generator::config[2087]: No configuration found. Apr 24 23:38:29.461636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:38:29.508073 systemd[1]: Reloading finished in 195 ms. Apr 24 23:38:29.557086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:29.558967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:29.560593 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:38:29.560850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:29.562721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:29.666983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:29.670492 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:38:29.714432 kubelet[2137]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:38:29.714432 kubelet[2137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:38:29.714804 kubelet[2137]: I0424 23:38:29.714482 2137 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:38:30.014099 kubelet[2137]: I0424 23:38:30.013979 2137 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 23:38:30.014099 kubelet[2137]: I0424 23:38:30.014016 2137 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:38:30.014099 kubelet[2137]: I0424 23:38:30.014039 2137 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:38:30.014099 kubelet[2137]: I0424 23:38:30.014046 2137 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:38:30.014316 kubelet[2137]: I0424 23:38:30.014293 2137 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:38:30.037251 kubelet[2137]: E0424 23:38:30.037190 2137 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:38:30.038225 kubelet[2137]: I0424 23:38:30.038189 2137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:38:30.044932 kubelet[2137]: E0424 23:38:30.044856 2137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:38:30.044932 kubelet[2137]: I0424 23:38:30.044911 2137 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Apr 24 23:38:30.048352 kubelet[2137]: I0424 23:38:30.048303 2137 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 24 23:38:30.050098 kubelet[2137]: I0424 23:38:30.050040 2137 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:38:30.050295 kubelet[2137]: I0424 23:38:30.050084 2137 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:38:30.050295 kubelet[2137]: I0424 23:38:30.050282 2137 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:38:30.050295 kubelet[2137]: I0424 23:38:30.050290 2137 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 23:38:30.050415 kubelet[2137]: I0424 23:38:30.050368 2137 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:38:30.053383 kubelet[2137]: I0424 23:38:30.053326 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:38:30.053569 kubelet[2137]: I0424 23:38:30.053528 2137 kubelet.go:475] "Attempting to sync node with API server" Apr 24 23:38:30.053569 kubelet[2137]: I0424 23:38:30.053552 2137 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:38:30.053569 kubelet[2137]: I0424 23:38:30.053570 2137 kubelet.go:387] "Adding apiserver pod source" Apr 24 23:38:30.053625 kubelet[2137]: I0424 23:38:30.053578 2137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:38:30.055442 kubelet[2137]: E0424 23:38:30.055367 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:38:30.055604 kubelet[2137]: E0424 23:38:30.055579 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:38:30.056274 kubelet[2137]: I0424 23:38:30.056256 2137 kuberuntime_manager.go:291] "Container runtime 
initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:38:30.057262 kubelet[2137]: I0424 23:38:30.057229 2137 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:38:30.057299 kubelet[2137]: I0424 23:38:30.057266 2137 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:38:30.057327 kubelet[2137]: W0424 23:38:30.057302 2137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 23:38:30.060401 kubelet[2137]: I0424 23:38:30.060366 2137 server.go:1262] "Started kubelet" Apr 24 23:38:30.061976 kubelet[2137]: I0424 23:38:30.061396 2137 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:38:30.061976 kubelet[2137]: I0424 23:38:30.061603 2137 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:38:30.062992 kubelet[2137]: I0424 23:38:30.062586 2137 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:38:30.062992 kubelet[2137]: I0424 23:38:30.062706 2137 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:38:30.062992 kubelet[2137]: I0424 23:38:30.062933 2137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:38:30.068215 kubelet[2137]: I0424 23:38:30.067782 2137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:38:30.068352 kubelet[2137]: I0424 23:38:30.068298 2137 server.go:310] "Adding debug handlers to kubelet server" Apr 24 23:38:30.071291 kubelet[2137]: E0424 23:38:30.069596 2137 event.go:368] "Unable to write event (may retry 
after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a96f4e4cc1835f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:38:30.060254047 +0000 UTC m=+0.386384195,LastTimestamp:2026-04-24 23:38:30.060254047 +0000 UTC m=+0.386384195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 23:38:30.071443 kubelet[2137]: E0424 23:38:30.071401 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 23:38:30.071478 kubelet[2137]: I0424 23:38:30.071468 2137 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 23:38:30.072972 kubelet[2137]: I0424 23:38:30.071627 2137 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:38:30.072972 kubelet[2137]: I0424 23:38:30.071714 2137 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:38:30.072972 kubelet[2137]: E0424 23:38:30.072584 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:38:30.072972 kubelet[2137]: I0424 23:38:30.072640 2137 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:38:30.072972 kubelet[2137]: E0424 23:38:30.072685 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Apr 24 23:38:30.072972 kubelet[2137]: I0424 23:38:30.072752 2137 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:38:30.074942 kubelet[2137]: E0424 23:38:30.074914 2137 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:38:30.075834 kubelet[2137]: I0424 23:38:30.075747 2137 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:38:30.087568 kubelet[2137]: I0424 23:38:30.087511 2137 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:38:30.087568 kubelet[2137]: I0424 23:38:30.087538 2137 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:38:30.087568 kubelet[2137]: I0424 23:38:30.087551 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:38:30.090188 kubelet[2137]: I0424 23:38:30.090104 2137 policy_none.go:49] "None policy: Start" Apr 24 23:38:30.090188 kubelet[2137]: I0424 23:38:30.090171 2137 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:38:30.090188 kubelet[2137]: I0424 23:38:30.090179 2137 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:38:30.091370 kubelet[2137]: I0424 23:38:30.091348 2137 policy_none.go:47] "Start" Apr 24 23:38:30.091644 kubelet[2137]: I0424 23:38:30.091605 2137 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:38:30.093089 kubelet[2137]: I0424 23:38:30.093057 2137 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:38:30.093089 kubelet[2137]: I0424 23:38:30.093073 2137 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 23:38:30.093089 kubelet[2137]: I0424 23:38:30.093089 2137 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 23:38:30.093296 kubelet[2137]: E0424 23:38:30.093158 2137 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:38:30.094453 kubelet[2137]: E0424 23:38:30.094304 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:38:30.097452 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 23:38:30.110741 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 23:38:30.113160 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 23:38:30.130996 kubelet[2137]: E0424 23:38:30.130916 2137 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:38:30.131169 kubelet[2137]: I0424 23:38:30.131090 2137 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:38:30.131169 kubelet[2137]: I0424 23:38:30.131143 2137 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:38:30.131833 kubelet[2137]: I0424 23:38:30.131311 2137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:38:30.132562 kubelet[2137]: E0424 23:38:30.132229 2137 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:38:30.132562 kubelet[2137]: E0424 23:38:30.132311 2137 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 24 23:38:30.203019 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 24 23:38:30.230432 kubelet[2137]: E0424 23:38:30.230328 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:30.233181 kubelet[2137]: I0424 23:38:30.233138 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:38:30.233151 systemd[1]: Created slice kubepods-burstable-pod63891635ea6315bd1d5fac1a91d9cebd.slice - libcontainer container kubepods-burstable-pod63891635ea6315bd1d5fac1a91d9cebd.slice. 
Apr 24 23:38:30.233460 kubelet[2137]: E0424 23:38:30.233393 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 24 23:38:30.234494 kubelet[2137]: E0424 23:38:30.234473 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:30.235846 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 24 23:38:30.237079 kubelet[2137]: E0424 23:38:30.237042 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:30.273710 kubelet[2137]: I0424 23:38:30.273030 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:30.273710 kubelet[2137]: I0424 23:38:30.273070 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63891635ea6315bd1d5fac1a91d9cebd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"63891635ea6315bd1d5fac1a91d9cebd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:30.273710 kubelet[2137]: I0424 23:38:30.273150 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:30.273710 kubelet[2137]: I0424 23:38:30.273167 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:30.273710 kubelet[2137]: I0424 23:38:30.273180 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63891635ea6315bd1d5fac1a91d9cebd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"63891635ea6315bd1d5fac1a91d9cebd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:30.273892 kubelet[2137]: I0424 23:38:30.273225 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63891635ea6315bd1d5fac1a91d9cebd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"63891635ea6315bd1d5fac1a91d9cebd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:30.273892 kubelet[2137]: E0424 23:38:30.273221 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Apr 24 23:38:30.273892 kubelet[2137]: I0424 23:38:30.273241 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:30.273892 
kubelet[2137]: I0424 23:38:30.273254 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:30.273892 kubelet[2137]: I0424 23:38:30.273271 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:30.436000 kubelet[2137]: I0424 23:38:30.435858 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:38:30.436398 kubelet[2137]: E0424 23:38:30.436333 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 24 23:38:30.535143 kubelet[2137]: E0424 23:38:30.534954 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:30.536652 containerd[1463]: time="2026-04-24T23:38:30.536589454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:30.536950 kubelet[2137]: E0424 23:38:30.536802 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:30.537276 containerd[1463]: 
time="2026-04-24T23:38:30.537230874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:63891635ea6315bd1d5fac1a91d9cebd,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:30.539378 kubelet[2137]: E0424 23:38:30.539315 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:30.539615 containerd[1463]: time="2026-04-24T23:38:30.539585614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:30.674694 kubelet[2137]: E0424 23:38:30.674559 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Apr 24 23:38:30.837933 kubelet[2137]: I0424 23:38:30.837818 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:38:30.838264 kubelet[2137]: E0424 23:38:30.838241 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 24 23:38:30.887806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596891754.mount: Deactivated successfully. 
Apr 24 23:38:30.894563 containerd[1463]: time="2026-04-24T23:38:30.894507314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:38:30.895060 containerd[1463]: time="2026-04-24T23:38:30.895010543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 24 23:38:30.897721 containerd[1463]: time="2026-04-24T23:38:30.897622724Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:38:30.898762 containerd[1463]: time="2026-04-24T23:38:30.898727016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:38:30.899966 containerd[1463]: time="2026-04-24T23:38:30.899923483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:38:30.900739 containerd[1463]: time="2026-04-24T23:38:30.900703596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:38:30.901498 containerd[1463]: time="2026-04-24T23:38:30.901473523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:38:30.902515 containerd[1463]: time="2026-04-24T23:38:30.902489258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:38:30.903085 
containerd[1463]: time="2026-04-24T23:38:30.903052327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 366.323407ms" Apr 24 23:38:30.904347 containerd[1463]: time="2026-04-24T23:38:30.904301381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 364.66243ms" Apr 24 23:38:30.907254 containerd[1463]: time="2026-04-24T23:38:30.907213122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 369.936666ms" Apr 24 23:38:31.162427 kubelet[2137]: E0424 23:38:31.161602 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:38:31.186615 kubelet[2137]: E0424 23:38:31.186533 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 
23:38:31.231747 kubelet[2137]: E0424 23:38:31.231654 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:38:31.311276 containerd[1463]: time="2026-04-24T23:38:31.309510652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:31.311276 containerd[1463]: time="2026-04-24T23:38:31.309598000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:31.311276 containerd[1463]: time="2026-04-24T23:38:31.309607795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:31.311276 containerd[1463]: time="2026-04-24T23:38:31.309658871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:31.316102 containerd[1463]: time="2026-04-24T23:38:31.315918519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:31.316457 containerd[1463]: time="2026-04-24T23:38:31.316281794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:31.316457 containerd[1463]: time="2026-04-24T23:38:31.316299461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:31.316457 containerd[1463]: time="2026-04-24T23:38:31.316361024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:31.319158 containerd[1463]: time="2026-04-24T23:38:31.318916883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:31.319158 containerd[1463]: time="2026-04-24T23:38:31.319009260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:31.319416 containerd[1463]: time="2026-04-24T23:38:31.319297917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:31.320859 containerd[1463]: time="2026-04-24T23:38:31.320792366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:31.397780 systemd[1]: Started cri-containerd-0cf7035a754a58b26f36c9bfc6e40e26eafdd27ad3b54839800e17e7a52b4961.scope - libcontainer container 0cf7035a754a58b26f36c9bfc6e40e26eafdd27ad3b54839800e17e7a52b4961. Apr 24 23:38:31.426461 systemd[1]: Started cri-containerd-b199512a096c08d57a2d12b4fedeb1a3729e8bb1874c3b3fb162a95e6b5202e3.scope - libcontainer container b199512a096c08d57a2d12b4fedeb1a3729e8bb1874c3b3fb162a95e6b5202e3. Apr 24 23:38:31.433019 systemd[1]: Started cri-containerd-9fe85203a1cb2fb13bb53e06d71127542f5665938f4c19bf3636bc0520dd8073.scope - libcontainer container 9fe85203a1cb2fb13bb53e06d71127542f5665938f4c19bf3636bc0520dd8073. 
Apr 24 23:38:31.456606 kubelet[2137]: E0424 23:38:31.456568 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:38:31.476097 kubelet[2137]: E0424 23:38:31.475082 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Apr 24 23:38:31.513302 containerd[1463]: time="2026-04-24T23:38:31.513041882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:63891635ea6315bd1d5fac1a91d9cebd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b199512a096c08d57a2d12b4fedeb1a3729e8bb1874c3b3fb162a95e6b5202e3\"" Apr 24 23:38:31.514464 kubelet[2137]: E0424 23:38:31.514425 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:31.514842 containerd[1463]: time="2026-04-24T23:38:31.514823228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf7035a754a58b26f36c9bfc6e40e26eafdd27ad3b54839800e17e7a52b4961\"" Apr 24 23:38:31.515900 kubelet[2137]: E0424 23:38:31.515776 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:31.521583 containerd[1463]: time="2026-04-24T23:38:31.521558953Z" level=info msg="CreateContainer within sandbox 
\"0cf7035a754a58b26f36c9bfc6e40e26eafdd27ad3b54839800e17e7a52b4961\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:38:31.522572 containerd[1463]: time="2026-04-24T23:38:31.522322460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fe85203a1cb2fb13bb53e06d71127542f5665938f4c19bf3636bc0520dd8073\"" Apr 24 23:38:31.522572 containerd[1463]: time="2026-04-24T23:38:31.522384719Z" level=info msg="CreateContainer within sandbox \"b199512a096c08d57a2d12b4fedeb1a3729e8bb1874c3b3fb162a95e6b5202e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:38:31.523235 kubelet[2137]: E0424 23:38:31.522911 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:31.526470 containerd[1463]: time="2026-04-24T23:38:31.526410229Z" level=info msg="CreateContainer within sandbox \"9fe85203a1cb2fb13bb53e06d71127542f5665938f4c19bf3636bc0520dd8073\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:38:31.543621 containerd[1463]: time="2026-04-24T23:38:31.543520197Z" level=info msg="CreateContainer within sandbox \"0cf7035a754a58b26f36c9bfc6e40e26eafdd27ad3b54839800e17e7a52b4961\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d6e18f0df2c1f9bc40dfdd69fd6c8c55667d6c883396a75e084da44d6484f4b\"" Apr 24 23:38:31.544844 containerd[1463]: time="2026-04-24T23:38:31.544759991Z" level=info msg="StartContainer for \"7d6e18f0df2c1f9bc40dfdd69fd6c8c55667d6c883396a75e084da44d6484f4b\"" Apr 24 23:38:31.550271 containerd[1463]: time="2026-04-24T23:38:31.550234590Z" level=info msg="CreateContainer within sandbox \"b199512a096c08d57a2d12b4fedeb1a3729e8bb1874c3b3fb162a95e6b5202e3\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"646a308285f212cb53d2f2deb40719a29f1291e467305c192e2df77affb5c4b6\"" Apr 24 23:38:31.550773 containerd[1463]: time="2026-04-24T23:38:31.550746463Z" level=info msg="StartContainer for \"646a308285f212cb53d2f2deb40719a29f1291e467305c192e2df77affb5c4b6\"" Apr 24 23:38:31.555852 containerd[1463]: time="2026-04-24T23:38:31.555451317Z" level=info msg="CreateContainer within sandbox \"9fe85203a1cb2fb13bb53e06d71127542f5665938f4c19bf3636bc0520dd8073\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f19584dcc19f6a3418f04c6e90a9c0a5e7e4594da8a90c0c9453e8945b39cb67\"" Apr 24 23:38:31.557929 containerd[1463]: time="2026-04-24T23:38:31.557877248Z" level=info msg="StartContainer for \"f19584dcc19f6a3418f04c6e90a9c0a5e7e4594da8a90c0c9453e8945b39cb67\"" Apr 24 23:38:31.578314 systemd[1]: Started cri-containerd-7d6e18f0df2c1f9bc40dfdd69fd6c8c55667d6c883396a75e084da44d6484f4b.scope - libcontainer container 7d6e18f0df2c1f9bc40dfdd69fd6c8c55667d6c883396a75e084da44d6484f4b. Apr 24 23:38:31.582929 systemd[1]: Started cri-containerd-646a308285f212cb53d2f2deb40719a29f1291e467305c192e2df77affb5c4b6.scope - libcontainer container 646a308285f212cb53d2f2deb40719a29f1291e467305c192e2df77affb5c4b6. Apr 24 23:38:31.591372 systemd[1]: Started cri-containerd-f19584dcc19f6a3418f04c6e90a9c0a5e7e4594da8a90c0c9453e8945b39cb67.scope - libcontainer container f19584dcc19f6a3418f04c6e90a9c0a5e7e4594da8a90c0c9453e8945b39cb67. 
Apr 24 23:38:31.628100 containerd[1463]: time="2026-04-24T23:38:31.628025508Z" level=info msg="StartContainer for \"7d6e18f0df2c1f9bc40dfdd69fd6c8c55667d6c883396a75e084da44d6484f4b\" returns successfully" Apr 24 23:38:31.640739 kubelet[2137]: I0424 23:38:31.640668 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:38:31.641044 kubelet[2137]: E0424 23:38:31.640979 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 24 23:38:31.644764 containerd[1463]: time="2026-04-24T23:38:31.644661203Z" level=info msg="StartContainer for \"646a308285f212cb53d2f2deb40719a29f1291e467305c192e2df77affb5c4b6\" returns successfully" Apr 24 23:38:31.656824 containerd[1463]: time="2026-04-24T23:38:31.656802520Z" level=info msg="StartContainer for \"f19584dcc19f6a3418f04c6e90a9c0a5e7e4594da8a90c0c9453e8945b39cb67\" returns successfully" Apr 24 23:38:32.136826 kubelet[2137]: E0424 23:38:32.136782 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:32.137104 kubelet[2137]: E0424 23:38:32.136898 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:32.140635 kubelet[2137]: E0424 23:38:32.140589 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:32.140810 kubelet[2137]: E0424 23:38:32.140765 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:32.143848 kubelet[2137]: E0424 23:38:32.143810 2137 kubelet.go:3216] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:32.144017 kubelet[2137]: E0424 23:38:32.143966 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:33.174653 kubelet[2137]: E0424 23:38:33.174611 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:33.176020 kubelet[2137]: E0424 23:38:33.175856 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:33.176020 kubelet[2137]: E0424 23:38:33.175927 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:33.176020 kubelet[2137]: E0424 23:38:33.175995 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:33.243945 kubelet[2137]: I0424 23:38:33.243855 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:38:33.427428 kubelet[2137]: E0424 23:38:33.427261 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:33.427428 kubelet[2137]: E0424 23:38:33.427427 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:33.755004 kernel: hrtimer: interrupt took 3800119 ns Apr 24 23:38:33.998255 kubelet[2137]: E0424 23:38:33.998222 2137 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 24 23:38:34.064890 kubelet[2137]: I0424 23:38:34.064854 2137 apiserver.go:52] "Watching apiserver" Apr 24 23:38:34.072892 kubelet[2137]: I0424 23:38:34.072794 2137 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 23:38:34.162864 kubelet[2137]: E0424 23:38:34.162807 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 23:38:34.163054 kubelet[2137]: E0424 23:38:34.163003 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:34.309816 kubelet[2137]: I0424 23:38:34.308650 2137 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 23:38:34.372938 kubelet[2137]: I0424 23:38:34.372392 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:34.378094 kubelet[2137]: E0424 23:38:34.378071 2137 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:34.378352 kubelet[2137]: I0424 23:38:34.378247 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:34.380351 kubelet[2137]: E0424 23:38:34.380309 2137 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:34.380351 kubelet[2137]: I0424 23:38:34.380330 2137 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:34.390062 kubelet[2137]: E0424 23:38:34.390012 2137 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:36.139636 systemd[1]: Reloading requested from client PID 2426 ('systemctl') (unit session-7.scope)... Apr 24 23:38:36.139664 systemd[1]: Reloading... Apr 24 23:38:36.203212 zram_generator::config[2465]: No configuration found. Apr 24 23:38:36.344676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:38:36.438232 systemd[1]: Reloading finished in 298 ms. Apr 24 23:38:36.475816 kubelet[2137]: I0424 23:38:36.475636 2137 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:38:36.475733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:36.500275 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:38:36.500513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:36.500574 systemd[1]: kubelet.service: Consumed 1.431s CPU time, 128.1M memory peak, 0B memory swap peak. Apr 24 23:38:36.512363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:38:36.620614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:38:36.624478 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:38:36.730448 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 24 23:38:36.730448 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:38:36.730448 kubelet[2510]: I0424 23:38:36.730178 2510 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:38:36.741901 kubelet[2510]: I0424 23:38:36.741423 2510 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 23:38:36.741901 kubelet[2510]: I0424 23:38:36.741452 2510 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:38:36.741901 kubelet[2510]: I0424 23:38:36.741488 2510 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:38:36.741901 kubelet[2510]: I0424 23:38:36.741499 2510 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:38:36.743060 kubelet[2510]: I0424 23:38:36.743031 2510 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:38:36.746303 kubelet[2510]: I0424 23:38:36.746227 2510 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:38:36.750263 kubelet[2510]: I0424 23:38:36.750158 2510 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:38:36.755507 kubelet[2510]: E0424 23:38:36.755460 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:38:36.755507 kubelet[2510]: I0424 23:38:36.755510 2510 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 24 23:38:36.759190 kubelet[2510]: I0424 23:38:36.758943 2510 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 24 23:38:36.759261 kubelet[2510]: I0424 23:38:36.759231 2510 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:38:36.759414 kubelet[2510]: I0424 23:38:36.759252 2510 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 
23:38:36.759414 kubelet[2510]: I0424 23:38:36.759377 2510 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:38:36.759414 kubelet[2510]: I0424 23:38:36.759384 2510 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 23:38:36.759414 kubelet[2510]: I0424 23:38:36.759403 2510 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:38:36.760157 kubelet[2510]: I0424 23:38:36.759605 2510 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:38:36.760157 kubelet[2510]: I0424 23:38:36.759787 2510 kubelet.go:475] "Attempting to sync node with API server" Apr 24 23:38:36.760157 kubelet[2510]: I0424 23:38:36.759799 2510 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:38:36.760157 kubelet[2510]: I0424 23:38:36.759813 2510 kubelet.go:387] "Adding apiserver pod source" Apr 24 23:38:36.760157 kubelet[2510]: I0424 23:38:36.759824 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:38:36.763652 kubelet[2510]: I0424 23:38:36.762922 2510 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:38:36.763652 kubelet[2510]: I0424 23:38:36.763378 2510 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:38:36.763652 kubelet[2510]: I0424 23:38:36.763399 2510 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:38:36.775168 kubelet[2510]: I0424 23:38:36.775029 2510 server.go:1262] "Started kubelet" Apr 24 23:38:36.775168 kubelet[2510]: I0424 23:38:36.775094 2510 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:38:36.778258 kubelet[2510]: I0424 23:38:36.777568 2510 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Apr 24 23:38:36.781145 kubelet[2510]: I0424 23:38:36.781073 2510 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 23:38:36.781194 kubelet[2510]: I0424 23:38:36.777966 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:38:36.781565 kubelet[2510]: I0424 23:38:36.781517 2510 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:38:36.781872 kubelet[2510]: I0424 23:38:36.781863 2510 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:38:36.781908 kubelet[2510]: I0424 23:38:36.780095 2510 server.go:310] "Adding debug handlers to kubelet server" Apr 24 23:38:36.784941 kubelet[2510]: I0424 23:38:36.784869 2510 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:38:36.785197 kubelet[2510]: I0424 23:38:36.785071 2510 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:38:36.786402 kubelet[2510]: I0424 23:38:36.778564 2510 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:38:36.786506 kubelet[2510]: I0424 23:38:36.786496 2510 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:38:36.787280 kubelet[2510]: I0424 23:38:36.787180 2510 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:38:36.790546 kubelet[2510]: I0424 23:38:36.790489 2510 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:38:36.791038 kubelet[2510]: E0424 23:38:36.790995 2510 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:38:36.804534 kubelet[2510]: I0424 23:38:36.804356 2510 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:38:36.806156 kubelet[2510]: I0424 23:38:36.806070 2510 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 23:38:36.806261 kubelet[2510]: I0424 23:38:36.806162 2510 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 23:38:36.806261 kubelet[2510]: I0424 23:38:36.806187 2510 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 23:38:36.806261 kubelet[2510]: E0424 23:38:36.806227 2510 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:38:36.836425 kubelet[2510]: I0424 23:38:36.836344 2510 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:38:36.836425 kubelet[2510]: I0424 23:38:36.836388 2510 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:38:36.836425 kubelet[2510]: I0424 23:38:36.836429 2510 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:38:36.836589 kubelet[2510]: I0424 23:38:36.836544 2510 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:38:36.836589 kubelet[2510]: I0424 23:38:36.836551 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:38:36.836589 kubelet[2510]: I0424 23:38:36.836565 2510 policy_none.go:49] "None policy: Start" Apr 24 23:38:36.836589 kubelet[2510]: I0424 23:38:36.836572 2510 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:38:36.836589 kubelet[2510]: I0424 23:38:36.836578 2510 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:38:36.836693 kubelet[2510]: I0424 23:38:36.836641 2510 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 24 23:38:36.836693 kubelet[2510]: I0424 23:38:36.836647 2510 policy_none.go:47] "Start" Apr 24 23:38:36.840702 kubelet[2510]: E0424 23:38:36.840546 2510 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:38:36.840702 kubelet[2510]: I0424 23:38:36.840658 2510 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:38:36.840702 kubelet[2510]: I0424 23:38:36.840666 2510 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:38:36.840903 kubelet[2510]: I0424 23:38:36.840863 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:38:36.844925 kubelet[2510]: E0424 23:38:36.844661 2510 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:38:36.908330 kubelet[2510]: I0424 23:38:36.908304 2510 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:36.908633 kubelet[2510]: I0424 23:38:36.908305 2510 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:36.908777 kubelet[2510]: I0424 23:38:36.908345 2510 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:36.962475 kubelet[2510]: I0424 23:38:36.962352 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 23:38:37.020989 kubelet[2510]: I0424 23:38:37.019075 2510 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 24 23:38:37.020989 kubelet[2510]: I0424 23:38:37.019338 2510 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 23:38:37.083982 kubelet[2510]: I0424 23:38:37.083883 2510 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63891635ea6315bd1d5fac1a91d9cebd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"63891635ea6315bd1d5fac1a91d9cebd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:37.083982 kubelet[2510]: I0424 23:38:37.083954 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.083982 kubelet[2510]: I0424 23:38:37.083972 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63891635ea6315bd1d5fac1a91d9cebd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"63891635ea6315bd1d5fac1a91d9cebd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:37.084229 kubelet[2510]: I0424 23:38:37.084083 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.084229 kubelet[2510]: I0424 23:38:37.084157 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.084229 kubelet[2510]: I0424 23:38:37.084191 2510 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.084229 kubelet[2510]: I0424 23:38:37.084206 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.084229 kubelet[2510]: I0424 23:38:37.084226 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:37.084305 kubelet[2510]: I0424 23:38:37.084239 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63891635ea6315bd1d5fac1a91d9cebd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"63891635ea6315bd1d5fac1a91d9cebd\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:37.215949 kubelet[2510]: E0424 23:38:37.215860 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:37.216088 kubelet[2510]: E0424 23:38:37.216013 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 24 23:38:37.224826 kubelet[2510]: E0424 23:38:37.223525 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:37.763891 kubelet[2510]: I0424 23:38:37.761664 2510 apiserver.go:52] "Watching apiserver" Apr 24 23:38:37.782247 kubelet[2510]: I0424 23:38:37.782178 2510 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 23:38:37.819428 kubelet[2510]: I0424 23:38:37.819384 2510 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:37.819794 kubelet[2510]: I0424 23:38:37.819753 2510 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:37.820163 kubelet[2510]: I0424 23:38:37.820083 2510 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.825948 kubelet[2510]: E0424 23:38:37.825894 2510 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 24 23:38:37.826940 kubelet[2510]: E0424 23:38:37.826323 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:37.827419 kubelet[2510]: E0424 23:38:37.827405 2510 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:38:37.827589 kubelet[2510]: E0424 23:38:37.827552 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:37.827700 kubelet[2510]: E0424 23:38:37.827638 2510 
kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 24 23:38:37.827700 kubelet[2510]: E0424 23:38:37.827691 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:37.857824 kubelet[2510]: I0424 23:38:37.857690 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.857434532 podStartE2EDuration="1.857434532s" podCreationTimestamp="2026-04-24 23:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:37.857337338 +0000 UTC m=+1.222414005" watchObservedRunningTime="2026-04-24 23:38:37.857434532 +0000 UTC m=+1.222511191" Apr 24 23:38:37.898415 kubelet[2510]: I0424 23:38:37.898278 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8982655130000001 podStartE2EDuration="1.898265513s" podCreationTimestamp="2026-04-24 23:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:37.897495411 +0000 UTC m=+1.262572077" watchObservedRunningTime="2026-04-24 23:38:37.898265513 +0000 UTC m=+1.263342180" Apr 24 23:38:38.821318 kubelet[2510]: E0424 23:38:38.821279 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:38.821632 kubelet[2510]: E0424 23:38:38.821402 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 
23:38:38.821865 kubelet[2510]: E0424 23:38:38.821801 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:40.173684 kubelet[2510]: E0424 23:38:40.173611 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:41.777638 kubelet[2510]: I0424 23:38:41.777592 2510 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:38:41.778076 containerd[1463]: time="2026-04-24T23:38:41.778038297Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:38:41.778337 kubelet[2510]: I0424 23:38:41.778269 2510 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:38:42.685272 kubelet[2510]: I0424 23:38:42.685191 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.685170844 podStartE2EDuration="6.685170844s" podCreationTimestamp="2026-04-24 23:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:37.907438832 +0000 UTC m=+1.272515510" watchObservedRunningTime="2026-04-24 23:38:42.685170844 +0000 UTC m=+6.050247515" Apr 24 23:38:42.694506 systemd[1]: Created slice kubepods-besteffort-pod5679e242_2f55_4c68_b066_ba91361fde7e.slice - libcontainer container kubepods-besteffort-pod5679e242_2f55_4c68_b066_ba91361fde7e.slice. 
Apr 24 23:38:42.727619 kubelet[2510]: I0424 23:38:42.727583 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5679e242-2f55-4c68-b066-ba91361fde7e-kube-proxy\") pod \"kube-proxy-skrhn\" (UID: \"5679e242-2f55-4c68-b066-ba91361fde7e\") " pod="kube-system/kube-proxy-skrhn" Apr 24 23:38:42.727619 kubelet[2510]: I0424 23:38:42.727623 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5679e242-2f55-4c68-b066-ba91361fde7e-xtables-lock\") pod \"kube-proxy-skrhn\" (UID: \"5679e242-2f55-4c68-b066-ba91361fde7e\") " pod="kube-system/kube-proxy-skrhn" Apr 24 23:38:42.727619 kubelet[2510]: I0424 23:38:42.727640 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5679e242-2f55-4c68-b066-ba91361fde7e-lib-modules\") pod \"kube-proxy-skrhn\" (UID: \"5679e242-2f55-4c68-b066-ba91361fde7e\") " pod="kube-system/kube-proxy-skrhn" Apr 24 23:38:42.727619 kubelet[2510]: I0424 23:38:42.727653 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhb7q\" (UniqueName: \"kubernetes.io/projected/5679e242-2f55-4c68-b066-ba91361fde7e-kube-api-access-vhb7q\") pod \"kube-proxy-skrhn\" (UID: \"5679e242-2f55-4c68-b066-ba91361fde7e\") " pod="kube-system/kube-proxy-skrhn" Apr 24 23:38:43.004172 systemd[1]: Created slice kubepods-besteffort-pod29f9cb34_23f0_40b8_bbf8_14fb1802e142.slice - libcontainer container kubepods-besteffort-pod29f9cb34_23f0_40b8_bbf8_14fb1802e142.slice. 
Apr 24 23:38:43.007261 kubelet[2510]: E0424 23:38:43.006764 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:43.009822 containerd[1463]: time="2026-04-24T23:38:43.008831546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skrhn,Uid:5679e242-2f55-4c68-b066-ba91361fde7e,Namespace:kube-system,Attempt:0,}"
Apr 24 23:38:43.029304 kubelet[2510]: I0424 23:38:43.029237 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29f9cb34-23f0-40b8-bbf8-14fb1802e142-var-lib-calico\") pod \"tigera-operator-5588576f44-szzln\" (UID: \"29f9cb34-23f0-40b8-bbf8-14fb1802e142\") " pod="tigera-operator/tigera-operator-5588576f44-szzln"
Apr 24 23:38:43.029304 kubelet[2510]: I0424 23:38:43.029282 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2xb7\" (UniqueName: \"kubernetes.io/projected/29f9cb34-23f0-40b8-bbf8-14fb1802e142-kube-api-access-v2xb7\") pod \"tigera-operator-5588576f44-szzln\" (UID: \"29f9cb34-23f0-40b8-bbf8-14fb1802e142\") " pod="tigera-operator/tigera-operator-5588576f44-szzln"
Apr 24 23:38:43.057060 containerd[1463]: time="2026-04-24T23:38:43.056842775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:38:43.057060 containerd[1463]: time="2026-04-24T23:38:43.056999207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:38:43.057060 containerd[1463]: time="2026-04-24T23:38:43.057008297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:38:43.057236 containerd[1463]: time="2026-04-24T23:38:43.057146866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:38:43.248341 systemd[1]: Started cri-containerd-82fa13030acad36892e1ce875e25f9c0986d399be7b39f4c36cbb544d5719114.scope - libcontainer container 82fa13030acad36892e1ce875e25f9c0986d399be7b39f4c36cbb544d5719114.
Apr 24 23:38:43.289593 containerd[1463]: time="2026-04-24T23:38:43.289497741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skrhn,Uid:5679e242-2f55-4c68-b066-ba91361fde7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"82fa13030acad36892e1ce875e25f9c0986d399be7b39f4c36cbb544d5719114\""
Apr 24 23:38:43.290857 kubelet[2510]: E0424 23:38:43.290524 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:43.296827 containerd[1463]: time="2026-04-24T23:38:43.296751850Z" level=info msg="CreateContainer within sandbox \"82fa13030acad36892e1ce875e25f9c0986d399be7b39f4c36cbb544d5719114\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 24 23:38:43.308951 containerd[1463]: time="2026-04-24T23:38:43.308832056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-szzln,Uid:29f9cb34-23f0-40b8-bbf8-14fb1802e142,Namespace:tigera-operator,Attempt:0,}"
Apr 24 23:38:43.314011 containerd[1463]: time="2026-04-24T23:38:43.313967468Z" level=info msg="CreateContainer within sandbox \"82fa13030acad36892e1ce875e25f9c0986d399be7b39f4c36cbb544d5719114\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b26aadc40c91b385110aeb3d1b6e23e3bd26aefd5588807ff28a66d638d17ca3\""
Apr 24 23:38:43.317163 containerd[1463]: time="2026-04-24T23:38:43.314568136Z" level=info msg="StartContainer for \"b26aadc40c91b385110aeb3d1b6e23e3bd26aefd5588807ff28a66d638d17ca3\""
Apr 24 23:38:43.349294 systemd[1]: Started cri-containerd-b26aadc40c91b385110aeb3d1b6e23e3bd26aefd5588807ff28a66d638d17ca3.scope - libcontainer container b26aadc40c91b385110aeb3d1b6e23e3bd26aefd5588807ff28a66d638d17ca3.
Apr 24 23:38:43.367807 containerd[1463]: time="2026-04-24T23:38:43.367295157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:38:43.367807 containerd[1463]: time="2026-04-24T23:38:43.367422952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:38:43.367807 containerd[1463]: time="2026-04-24T23:38:43.367645783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:38:43.367807 containerd[1463]: time="2026-04-24T23:38:43.367747310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:38:43.382175 containerd[1463]: time="2026-04-24T23:38:43.380063612Z" level=info msg="StartContainer for \"b26aadc40c91b385110aeb3d1b6e23e3bd26aefd5588807ff28a66d638d17ca3\" returns successfully"
Apr 24 23:38:43.408368 systemd[1]: Started cri-containerd-0ba842e3e7cca587cd39e4e4af256da29119d150d3cc3afadefe506dffe0a5c2.scope - libcontainer container 0ba842e3e7cca587cd39e4e4af256da29119d150d3cc3afadefe506dffe0a5c2.
Apr 24 23:38:43.493903 containerd[1463]: time="2026-04-24T23:38:43.493875300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-szzln,Uid:29f9cb34-23f0-40b8-bbf8-14fb1802e142,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0ba842e3e7cca587cd39e4e4af256da29119d150d3cc3afadefe506dffe0a5c2\""
Apr 24 23:38:43.499227 containerd[1463]: time="2026-04-24T23:38:43.496854514Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 24 23:38:43.830559 kubelet[2510]: E0424 23:38:43.830527 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:44.938885 kubelet[2510]: E0424 23:38:44.938483 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:45.090310 kubelet[2510]: I0424 23:38:45.089916 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-skrhn" podStartSLOduration=3.08990364 podStartE2EDuration="3.08990364s" podCreationTimestamp="2026-04-24 23:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:43.841227062 +0000 UTC m=+7.206303732" watchObservedRunningTime="2026-04-24 23:38:45.08990364 +0000 UTC m=+8.454980309"
Apr 24 23:38:45.151389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188344699.mount: Deactivated successfully.
Apr 24 23:38:45.835685 kubelet[2510]: E0424 23:38:45.835584 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:46.151373 kubelet[2510]: E0424 23:38:46.151349 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:46.286428 containerd[1463]: time="2026-04-24T23:38:46.286327704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:46.287279 containerd[1463]: time="2026-04-24T23:38:46.287196437Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 24 23:38:46.288399 containerd[1463]: time="2026-04-24T23:38:46.288355714Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:46.290600 containerd[1463]: time="2026-04-24T23:38:46.290556864Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:46.291457 containerd[1463]: time="2026-04-24T23:38:46.291381876Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.794477099s"
Apr 24 23:38:46.291457 containerd[1463]: time="2026-04-24T23:38:46.291420934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 24 23:38:46.298003 containerd[1463]: time="2026-04-24T23:38:46.297891404Z" level=info msg="CreateContainer within sandbox \"0ba842e3e7cca587cd39e4e4af256da29119d150d3cc3afadefe506dffe0a5c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 24 23:38:46.311718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171190865.mount: Deactivated successfully.
Apr 24 23:38:46.314570 containerd[1463]: time="2026-04-24T23:38:46.314522598Z" level=info msg="CreateContainer within sandbox \"0ba842e3e7cca587cd39e4e4af256da29119d150d3cc3afadefe506dffe0a5c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"160ca2ba01f48fd2d0550b2a181cd9dd82eb62ab1965de6c5cda1d81b968c838\""
Apr 24 23:38:46.315329 containerd[1463]: time="2026-04-24T23:38:46.315301374Z" level=info msg="StartContainer for \"160ca2ba01f48fd2d0550b2a181cd9dd82eb62ab1965de6c5cda1d81b968c838\""
Apr 24 23:38:46.376326 systemd[1]: Started cri-containerd-160ca2ba01f48fd2d0550b2a181cd9dd82eb62ab1965de6c5cda1d81b968c838.scope - libcontainer container 160ca2ba01f48fd2d0550b2a181cd9dd82eb62ab1965de6c5cda1d81b968c838.
Apr 24 23:38:46.403577 containerd[1463]: time="2026-04-24T23:38:46.403448821Z" level=info msg="StartContainer for \"160ca2ba01f48fd2d0550b2a181cd9dd82eb62ab1965de6c5cda1d81b968c838\" returns successfully"
Apr 24 23:38:46.839001 kubelet[2510]: E0424 23:38:46.838811 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:46.839358 kubelet[2510]: E0424 23:38:46.839317 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:46.858663 kubelet[2510]: I0424 23:38:46.858578 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-szzln" podStartSLOduration=2.061511398 podStartE2EDuration="4.858565704s" podCreationTimestamp="2026-04-24 23:38:42 +0000 UTC" firstStartedPulling="2026-04-24 23:38:43.49596833 +0000 UTC m=+6.861044998" lastFinishedPulling="2026-04-24 23:38:46.293022645 +0000 UTC m=+9.658099304" observedRunningTime="2026-04-24 23:38:46.848486847 +0000 UTC m=+10.213563517" watchObservedRunningTime="2026-04-24 23:38:46.858565704 +0000 UTC m=+10.223642375"
Apr 24 23:38:50.180973 kubelet[2510]: E0424 23:38:50.180935 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:51.702729 sudo[1641]: pam_unix(sudo:session): session closed for user root
Apr 24 23:38:51.704443 sshd[1638]: pam_unix(sshd:session): session closed for user core
Apr 24 23:38:51.707416 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:36810.service: Deactivated successfully.
Apr 24 23:38:51.710679 systemd[1]: session-7.scope: Deactivated successfully.
Apr 24 23:38:51.710891 systemd[1]: session-7.scope: Consumed 5.899s CPU time, 160.3M memory peak, 0B memory swap peak.
Apr 24 23:38:51.711911 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit.
Apr 24 23:38:51.716462 systemd-logind[1453]: Removed session 7.
Apr 24 23:38:53.285773 update_engine[1457]: I20260424 23:38:53.285655 1457 update_attempter.cc:509] Updating boot flags...
Apr 24 23:38:53.314817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2934)
Apr 24 23:38:53.361844 systemd[1]: Created slice kubepods-besteffort-poddd5ca44e_1357_466b_ba50_db7a7b97e805.slice - libcontainer container kubepods-besteffort-poddd5ca44e_1357_466b_ba50_db7a7b97e805.slice.
Apr 24 23:38:53.392151 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2932)
Apr 24 23:38:53.393400 kubelet[2510]: I0424 23:38:53.393340 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4rr\" (UniqueName: \"kubernetes.io/projected/dd5ca44e-1357-466b-ba50-db7a7b97e805-kube-api-access-st4rr\") pod \"calico-typha-96bffc494-w5h94\" (UID: \"dd5ca44e-1357-466b-ba50-db7a7b97e805\") " pod="calico-system/calico-typha-96bffc494-w5h94"
Apr 24 23:38:53.393723 kubelet[2510]: I0424 23:38:53.393466 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd5ca44e-1357-466b-ba50-db7a7b97e805-tigera-ca-bundle\") pod \"calico-typha-96bffc494-w5h94\" (UID: \"dd5ca44e-1357-466b-ba50-db7a7b97e805\") " pod="calico-system/calico-typha-96bffc494-w5h94"
Apr 24 23:38:53.393723 kubelet[2510]: I0424 23:38:53.393479 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd5ca44e-1357-466b-ba50-db7a7b97e805-typha-certs\") pod \"calico-typha-96bffc494-w5h94\" (UID: \"dd5ca44e-1357-466b-ba50-db7a7b97e805\") " pod="calico-system/calico-typha-96bffc494-w5h94"
Apr 24 23:38:53.433396 systemd[1]: Created slice kubepods-besteffort-pod3b406891_ee3c_48c1_87b5_e301f0b4a442.slice - libcontainer container kubepods-besteffort-pod3b406891_ee3c_48c1_87b5_e301f0b4a442.slice.
Apr 24 23:38:53.495230 kubelet[2510]: I0424 23:38:53.495188 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3b406891-ee3c-48c1-87b5-e301f0b4a442-node-certs\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.495421 kubelet[2510]: I0424 23:38:53.495298 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-cni-log-dir\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497158 kubelet[2510]: I0424 23:38:53.495318 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-policysync\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497158 kubelet[2510]: I0424 23:38:53.495767 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-cni-bin-dir\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497158 kubelet[2510]: I0424 23:38:53.495781 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-cni-net-dir\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497158 kubelet[2510]: I0424 23:38:53.495871 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-lib-modules\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497158 kubelet[2510]: I0424 23:38:53.495885 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b406891-ee3c-48c1-87b5-e301f0b4a442-tigera-ca-bundle\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497309 kubelet[2510]: I0424 23:38:53.496039 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-var-lib-calico\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497309 kubelet[2510]: I0424 23:38:53.496054 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-flexvol-driver-host\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497309 kubelet[2510]: I0424 23:38:53.496080 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-var-run-calico\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497309 kubelet[2510]: I0424 23:38:53.496162 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-xtables-lock\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497309 kubelet[2510]: I0424 23:38:53.496184 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbpv\" (UniqueName: \"kubernetes.io/projected/3b406891-ee3c-48c1-87b5-e301f0b4a442-kube-api-access-ccbpv\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497388 kubelet[2510]: I0424 23:38:53.496368 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-bpffs\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497388 kubelet[2510]: I0424 23:38:53.496380 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-nodeproc\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.497388 kubelet[2510]: I0424 23:38:53.496700 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3b406891-ee3c-48c1-87b5-e301f0b4a442-sys-fs\") pod \"calico-node-vcwgz\" (UID: \"3b406891-ee3c-48c1-87b5-e301f0b4a442\") " pod="calico-system/calico-node-vcwgz"
Apr 24 23:38:53.527919 kubelet[2510]: E0424 23:38:53.527684 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4"
Apr 24 23:38:53.598738 kubelet[2510]: I0424 23:38:53.598413 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/432f514d-2771-4770-8cbd-167f3881d2c4-kubelet-dir\") pod \"csi-node-driver-b2j2n\" (UID: \"432f514d-2771-4770-8cbd-167f3881d2c4\") " pod="calico-system/csi-node-driver-b2j2n"
Apr 24 23:38:53.598738 kubelet[2510]: I0424 23:38:53.598540 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/432f514d-2771-4770-8cbd-167f3881d2c4-varrun\") pod \"csi-node-driver-b2j2n\" (UID: \"432f514d-2771-4770-8cbd-167f3881d2c4\") " pod="calico-system/csi-node-driver-b2j2n"
Apr 24 23:38:53.598738 kubelet[2510]: I0424 23:38:53.598552 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v82g7\" (UniqueName: \"kubernetes.io/projected/432f514d-2771-4770-8cbd-167f3881d2c4-kube-api-access-v82g7\") pod \"csi-node-driver-b2j2n\" (UID: \"432f514d-2771-4770-8cbd-167f3881d2c4\") " pod="calico-system/csi-node-driver-b2j2n"
Apr 24 23:38:53.598738 kubelet[2510]: I0424 23:38:53.598572 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/432f514d-2771-4770-8cbd-167f3881d2c4-registration-dir\") pod \"csi-node-driver-b2j2n\" (UID: \"432f514d-2771-4770-8cbd-167f3881d2c4\") " pod="calico-system/csi-node-driver-b2j2n"
Apr 24 23:38:53.598738 kubelet[2510]: I0424 23:38:53.598616 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/432f514d-2771-4770-8cbd-167f3881d2c4-socket-dir\") pod \"csi-node-driver-b2j2n\" (UID: \"432f514d-2771-4770-8cbd-167f3881d2c4\") " pod="calico-system/csi-node-driver-b2j2n"
Apr 24 23:38:53.605314 kubelet[2510]: E0424 23:38:53.605282 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.605314 kubelet[2510]: W0424 23:38:53.605311 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.605407 kubelet[2510]: E0424 23:38:53.605354 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.607916 kubelet[2510]: E0424 23:38:53.607889 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.607916 kubelet[2510]: W0424 23:38:53.607913 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.607990 kubelet[2510]: E0424 23:38:53.607924 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.668958 kubelet[2510]: E0424 23:38:53.668874 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:53.669685 containerd[1463]: time="2026-04-24T23:38:53.669646096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-96bffc494-w5h94,Uid:dd5ca44e-1357-466b-ba50-db7a7b97e805,Namespace:calico-system,Attempt:0,}"
Apr 24 23:38:53.697760 containerd[1463]: time="2026-04-24T23:38:53.696799959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:38:53.697980 containerd[1463]: time="2026-04-24T23:38:53.697833762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:38:53.697980 containerd[1463]: time="2026-04-24T23:38:53.697904043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:38:53.698359 containerd[1463]: time="2026-04-24T23:38:53.698253088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:38:53.700032 kubelet[2510]: E0424 23:38:53.699980 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.700032 kubelet[2510]: W0424 23:38:53.700010 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.700032 kubelet[2510]: E0424 23:38:53.700025 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.700323 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701794 kubelet[2510]: W0424 23:38:53.700331 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.700338 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.700599 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701794 kubelet[2510]: W0424 23:38:53.700604 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.700610 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.700878 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701794 kubelet[2510]: W0424 23:38:53.700884 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.700890 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701794 kubelet[2510]: E0424 23:38:53.701094 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701987 kubelet[2510]: W0424 23:38:53.701099 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.701987 kubelet[2510]: E0424 23:38:53.701138 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701987 kubelet[2510]: E0424 23:38:53.701437 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701987 kubelet[2510]: W0424 23:38:53.701444 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.701987 kubelet[2510]: E0424 23:38:53.701451 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701987 kubelet[2510]: E0424 23:38:53.701635 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701987 kubelet[2510]: W0424 23:38:53.701640 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.701987 kubelet[2510]: E0424 23:38:53.701646 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.701987 kubelet[2510]: E0424 23:38:53.701834 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.701987 kubelet[2510]: W0424 23:38:53.701839 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.702173 kubelet[2510]: E0424 23:38:53.701847 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.702173 kubelet[2510]: E0424 23:38:53.702029 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.702173 kubelet[2510]: W0424 23:38:53.702034 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.702173 kubelet[2510]: E0424 23:38:53.702039 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.702520 kubelet[2510]: E0424 23:38:53.702250 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.702520 kubelet[2510]: W0424 23:38:53.702255 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.702520 kubelet[2510]: E0424 23:38:53.702260 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.702520 kubelet[2510]: E0424 23:38:53.702424 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.702520 kubelet[2510]: W0424 23:38:53.702429 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.702520 kubelet[2510]: E0424 23:38:53.702434 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.702680 kubelet[2510]: E0424 23:38:53.702645 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.702680 kubelet[2510]: W0424 23:38:53.702665 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.702680 kubelet[2510]: E0424 23:38:53.702672 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.702992 kubelet[2510]: E0424 23:38:53.702953 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.702992 kubelet[2510]: W0424 23:38:53.702972 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.702992 kubelet[2510]: E0424 23:38:53.702978 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.703287 kubelet[2510]: E0424 23:38:53.703243 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.703287 kubelet[2510]: W0424 23:38:53.703269 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.703287 kubelet[2510]: E0424 23:38:53.703284 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:53.703559 kubelet[2510]: E0424 23:38:53.703514 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:53.703559 kubelet[2510]: W0424 23:38:53.703533 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:53.703627 kubelet[2510]: E0424 23:38:53.703596 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:38:53.703861 kubelet[2510]: E0424 23:38:53.703830 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.703861 kubelet[2510]: W0424 23:38:53.703852 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.703861 kubelet[2510]: E0424 23:38:53.703859 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.704159 kubelet[2510]: E0424 23:38:53.704064 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.704159 kubelet[2510]: W0424 23:38:53.704083 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.704159 kubelet[2510]: E0424 23:38:53.704089 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:53.704376 kubelet[2510]: E0424 23:38:53.704340 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.704376 kubelet[2510]: W0424 23:38:53.704358 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.704376 kubelet[2510]: E0424 23:38:53.704364 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.704702 kubelet[2510]: E0424 23:38:53.704661 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.704702 kubelet[2510]: W0424 23:38:53.704684 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.704702 kubelet[2510]: E0424 23:38:53.704694 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:53.705003 kubelet[2510]: E0424 23:38:53.704968 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.705003 kubelet[2510]: W0424 23:38:53.704987 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.705003 kubelet[2510]: E0424 23:38:53.704994 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.705322 kubelet[2510]: E0424 23:38:53.705293 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.705322 kubelet[2510]: W0424 23:38:53.705313 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.705322 kubelet[2510]: E0424 23:38:53.705320 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:53.705569 kubelet[2510]: E0424 23:38:53.705532 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.705569 kubelet[2510]: W0424 23:38:53.705551 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.705569 kubelet[2510]: E0424 23:38:53.705556 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.706350 kubelet[2510]: E0424 23:38:53.706306 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.706350 kubelet[2510]: W0424 23:38:53.706326 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.706350 kubelet[2510]: E0424 23:38:53.706334 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:53.706562 kubelet[2510]: E0424 23:38:53.706531 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.706562 kubelet[2510]: W0424 23:38:53.706550 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.706562 kubelet[2510]: E0424 23:38:53.706556 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.707794 kubelet[2510]: E0424 23:38:53.707770 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.707841 kubelet[2510]: W0424 23:38:53.707795 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.707841 kubelet[2510]: E0424 23:38:53.707804 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.715652 systemd[1]: Started cri-containerd-553502bddcd06ff08a9850ed3c83ca5e8aa260462a2bb6061c016063a4a80414.scope - libcontainer container 553502bddcd06ff08a9850ed3c83ca5e8aa260462a2bb6061c016063a4a80414. 
Apr 24 23:38:53.716373 kubelet[2510]: E0424 23:38:53.716266 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:53.716373 kubelet[2510]: W0424 23:38:53.716276 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:53.716373 kubelet[2510]: E0424 23:38:53.716285 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:53.739334 containerd[1463]: time="2026-04-24T23:38:53.739281912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vcwgz,Uid:3b406891-ee3c-48c1-87b5-e301f0b4a442,Namespace:calico-system,Attempt:0,}" Apr 24 23:38:53.750626 containerd[1463]: time="2026-04-24T23:38:53.750553602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-96bffc494-w5h94,Uid:dd5ca44e-1357-466b-ba50-db7a7b97e805,Namespace:calico-system,Attempt:0,} returns sandbox id \"553502bddcd06ff08a9850ed3c83ca5e8aa260462a2bb6061c016063a4a80414\"" Apr 24 23:38:53.751293 kubelet[2510]: E0424 23:38:53.751250 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:53.752241 containerd[1463]: time="2026-04-24T23:38:53.752201261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:38:53.788185 containerd[1463]: time="2026-04-24T23:38:53.782819279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:53.788185 containerd[1463]: time="2026-04-24T23:38:53.782939422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:53.788185 containerd[1463]: time="2026-04-24T23:38:53.782948699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:53.788185 containerd[1463]: time="2026-04-24T23:38:53.783086049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:53.808895 systemd[1]: Started cri-containerd-ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8.scope - libcontainer container ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8. Apr 24 23:38:53.833902 containerd[1463]: time="2026-04-24T23:38:53.833801774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vcwgz,Uid:3b406891-ee3c-48c1-87b5-e301f0b4a442,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\"" Apr 24 23:38:54.808019 kubelet[2510]: E0424 23:38:54.807814 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:38:55.652540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536979356.mount: Deactivated successfully. 
Apr 24 23:38:56.807746 kubelet[2510]: E0424 23:38:56.807604 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:38:57.716863 containerd[1463]: time="2026-04-24T23:38:57.716740805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:57.717674 containerd[1463]: time="2026-04-24T23:38:57.717417823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 24 23:38:57.718664 containerd[1463]: time="2026-04-24T23:38:57.718614060Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:57.720842 containerd[1463]: time="2026-04-24T23:38:57.720796141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:57.721294 containerd[1463]: time="2026-04-24T23:38:57.721251287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.969008496s" Apr 24 23:38:57.721326 containerd[1463]: time="2026-04-24T23:38:57.721291321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 24 23:38:57.725174 containerd[1463]: time="2026-04-24T23:38:57.723339766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:38:57.739340 containerd[1463]: time="2026-04-24T23:38:57.739251213Z" level=info msg="CreateContainer within sandbox \"553502bddcd06ff08a9850ed3c83ca5e8aa260462a2bb6061c016063a4a80414\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:38:57.752301 containerd[1463]: time="2026-04-24T23:38:57.752247176Z" level=info msg="CreateContainer within sandbox \"553502bddcd06ff08a9850ed3c83ca5e8aa260462a2bb6061c016063a4a80414\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"366e325be2d1e85ac71293c6d6476fa2443c3f1449254120c1140399c04ffb4c\"" Apr 24 23:38:57.753137 containerd[1463]: time="2026-04-24T23:38:57.752732974Z" level=info msg="StartContainer for \"366e325be2d1e85ac71293c6d6476fa2443c3f1449254120c1140399c04ffb4c\"" Apr 24 23:38:57.792339 systemd[1]: Started cri-containerd-366e325be2d1e85ac71293c6d6476fa2443c3f1449254120c1140399c04ffb4c.scope - libcontainer container 366e325be2d1e85ac71293c6d6476fa2443c3f1449254120c1140399c04ffb4c. 
Apr 24 23:38:57.836150 containerd[1463]: time="2026-04-24T23:38:57.836009716Z" level=info msg="StartContainer for \"366e325be2d1e85ac71293c6d6476fa2443c3f1449254120c1140399c04ffb4c\" returns successfully" Apr 24 23:38:57.882683 kubelet[2510]: E0424 23:38:57.882649 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:38:57.891848 kubelet[2510]: I0424 23:38:57.891772 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-96bffc494-w5h94" podStartSLOduration=0.921432618 podStartE2EDuration="4.891758892s" podCreationTimestamp="2026-04-24 23:38:53 +0000 UTC" firstStartedPulling="2026-04-24 23:38:53.751893257 +0000 UTC m=+17.116969917" lastFinishedPulling="2026-04-24 23:38:57.722219531 +0000 UTC m=+21.087296191" observedRunningTime="2026-04-24 23:38:57.891259077 +0000 UTC m=+21.256335745" watchObservedRunningTime="2026-04-24 23:38:57.891758892 +0000 UTC m=+21.256835565" Apr 24 23:38:57.894471 kubelet[2510]: E0424 23:38:57.894452 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.894471 kubelet[2510]: W0424 23:38:57.894465 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.894479 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.894761 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.894720 kubelet[2510]: W0424 23:38:57.894768 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.894775 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.894886 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.894720 kubelet[2510]: W0424 23:38:57.894890 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.894896 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.895017 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.894720 kubelet[2510]: W0424 23:38:57.895021 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.894720 kubelet[2510]: E0424 23:38:57.895027 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895195 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897158 kubelet[2510]: W0424 23:38:57.895201 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895207 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895298 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897158 kubelet[2510]: W0424 23:38:57.895302 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895307 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895392 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897158 kubelet[2510]: W0424 23:38:57.895396 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895401 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.897158 kubelet[2510]: E0424 23:38:57.895513 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897316 kubelet[2510]: W0424 23:38:57.895519 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897316 kubelet[2510]: E0424 23:38:57.895524 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.897316 kubelet[2510]: E0424 23:38:57.895660 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897316 kubelet[2510]: W0424 23:38:57.895665 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897316 kubelet[2510]: E0424 23:38:57.895671 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.897316 kubelet[2510]: E0424 23:38:57.895761 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897316 kubelet[2510]: W0424 23:38:57.895765 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897316 kubelet[2510]: E0424 23:38:57.895771 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.897316 kubelet[2510]: E0424 23:38:57.895853 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897316 kubelet[2510]: W0424 23:38:57.895857 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.895862 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.895949 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897460 kubelet[2510]: W0424 23:38:57.895952 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.895957 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.896047 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897460 kubelet[2510]: W0424 23:38:57.896051 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.896057 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.896202 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897460 kubelet[2510]: W0424 23:38:57.896207 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897460 kubelet[2510]: E0424 23:38:57.896213 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.897847 kubelet[2510]: E0424 23:38:57.896300 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.897847 kubelet[2510]: W0424 23:38:57.896304 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.897847 kubelet[2510]: E0424 23:38:57.896309 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.946758 kubelet[2510]: E0424 23:38:57.946662 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.946758 kubelet[2510]: W0424 23:38:57.946702 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.946758 kubelet[2510]: E0424 23:38:57.946746 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.947277 kubelet[2510]: E0424 23:38:57.947007 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.947277 kubelet[2510]: W0424 23:38:57.947013 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.947277 kubelet[2510]: E0424 23:38:57.947020 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:38:57.947277 kubelet[2510]: E0424 23:38:57.947261 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.947277 kubelet[2510]: W0424 23:38:57.947267 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.947277 kubelet[2510]: E0424 23:38:57.947273 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:38:57.947864 kubelet[2510]: E0424 23:38:57.947715 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:38:57.947864 kubelet[2510]: W0424 23:38:57.947738 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:38:57.947864 kubelet[2510]: E0424 23:38:57.947746 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:38:57.948454 kubelet[2510]: E0424 23:38:57.947995 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:57.948454 kubelet[2510]: W0424 23:38:57.948001 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:57.948454 kubelet[2510]: E0424 23:38:57.948008 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:38:58.807340 kubelet[2510]: E0424 23:38:58.807298 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4"
Apr 24 23:38:58.887580 kubelet[2510]: I0424 23:38:58.887466 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 24 23:38:58.888396 kubelet[2510]: E0424 23:38:58.887971 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:38:58.964233 kubelet[2510]: E0424 23:38:58.964203 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:38:58.964285 kubelet[2510]: W0424 23:38:58.964266 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:38:58.964489 kubelet[2510]: E0424 23:38:58.964284 2510 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 24 23:38:59.074015 containerd[1463]: time="2026-04-24T23:38:59.073869321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:59.074621 containerd[1463]: time="2026-04-24T23:38:59.074586659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 24 23:38:59.075918 containerd[1463]: time="2026-04-24T23:38:59.075884890Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:59.078003 containerd[1463]: time="2026-04-24T23:38:59.077951527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:38:59.078493 containerd[1463]: time="2026-04-24T23:38:59.078465060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.355102209s"
Apr 24 23:38:59.078534 containerd[1463]: time="2026-04-24T23:38:59.078499396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 24 23:38:59.082661 containerd[1463]: time="2026-04-24T23:38:59.082629670Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 24 23:38:59.102265 containerd[1463]: time="2026-04-24T23:38:59.102093684Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61\""
Apr 24 23:38:59.103269 containerd[1463]: time="2026-04-24T23:38:59.103201591Z" level=info msg="StartContainer for \"c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61\""
Apr 24 23:38:59.142303 systemd[1]: Started cri-containerd-c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61.scope - libcontainer container c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61.
Apr 24 23:38:59.182430 systemd[1]: cri-containerd-c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61.scope: Deactivated successfully.
Apr 24 23:38:59.201680 containerd[1463]: time="2026-04-24T23:38:59.201617373Z" level=info msg="StartContainer for \"c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61\" returns successfully"
Apr 24 23:38:59.227354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61-rootfs.mount: Deactivated successfully.
Apr 24 23:38:59.234653 containerd[1463]: time="2026-04-24T23:38:59.234472856Z" level=info msg="shim disconnected" id=c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61 namespace=k8s.io Apr 24 23:38:59.234653 containerd[1463]: time="2026-04-24T23:38:59.234587425Z" level=warning msg="cleaning up after shim disconnected" id=c8aa25a2204d1f30ff0d5442f4de5feabc906b81b4461877c4b9bf7129fc1e61 namespace=k8s.io Apr 24 23:38:59.234653 containerd[1463]: time="2026-04-24T23:38:59.234617305Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:59.891916 containerd[1463]: time="2026-04-24T23:38:59.891880923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:39:00.810885 kubelet[2510]: E0424 23:39:00.810755 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:02.807849 kubelet[2510]: E0424 23:39:02.807763 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:03.845599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373982117.mount: Deactivated successfully. 
Apr 24 23:39:04.043586 containerd[1463]: time="2026-04-24T23:39:04.043447904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:04.044847 containerd[1463]: time="2026-04-24T23:39:04.044344976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:39:04.045770 containerd[1463]: time="2026-04-24T23:39:04.045736733Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:04.048485 containerd[1463]: time="2026-04-24T23:39:04.048352355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:04.049167 containerd[1463]: time="2026-04-24T23:39:04.049072651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.157148888s" Apr 24 23:39:04.049204 containerd[1463]: time="2026-04-24T23:39:04.049164268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:39:04.057648 containerd[1463]: time="2026-04-24T23:39:04.057498896Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:39:04.124555 containerd[1463]: time="2026-04-24T23:39:04.124341241Z" level=info 
msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76\"" Apr 24 23:39:04.125236 containerd[1463]: time="2026-04-24T23:39:04.125095508Z" level=info msg="StartContainer for \"04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76\"" Apr 24 23:39:04.180277 systemd[1]: Started cri-containerd-04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76.scope - libcontainer container 04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76. Apr 24 23:39:04.203620 containerd[1463]: time="2026-04-24T23:39:04.203516208Z" level=info msg="StartContainer for \"04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76\" returns successfully" Apr 24 23:39:04.243341 systemd[1]: cri-containerd-04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76.scope: Deactivated successfully. 
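A side note on the timings in the entries above: containerd logs both the PullImage start and the "Pulled image ... in <duration>" completion with RFC 3339 nanosecond timestamps, so the reported duration can be cross-checked by subtracting them. A minimal sketch (Python, timestamps copied from the calico/node:v3.31.4 pull above; `datetime.strptime` only keeps microsecond precision, so the nanosecond tail is truncated):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # Trim the trailing 'Z' and cut the fraction to six digits,
    # since %f accepts at most microsecond precision.
    head, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

# Timestamps from the containerd entries above for calico/node:v3.31.4
start = parse("2026-04-24T23:38:59.891880923Z")  # PullImage logged
end = parse("2026-04-24T23:39:04.049072651Z")    # "Pulled image ... in 4.157148888s"
elapsed = (end - start).total_seconds()          # ~4.157 s, matching containerd's figure
```

The small residual against containerd's own 4.157148888s figure is expected: the log timestamp marks when each message was emitted, not when the transfer itself started or finished.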
Apr 24 23:39:04.351710 containerd[1463]: time="2026-04-24T23:39:04.351587649Z" level=info msg="shim disconnected" id=04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76 namespace=k8s.io Apr 24 23:39:04.351710 containerd[1463]: time="2026-04-24T23:39:04.351659199Z" level=warning msg="cleaning up after shim disconnected" id=04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76 namespace=k8s.io Apr 24 23:39:04.351710 containerd[1463]: time="2026-04-24T23:39:04.351668340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:04.810834 kubelet[2510]: E0424 23:39:04.810731 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:04.846655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04ee460b1c59cbe310a1f58217310393fba6c6d5dc7ad43e9cdb7d4d17f51c76-rootfs.mount: Deactivated successfully. 
Apr 24 23:39:04.915343 containerd[1463]: time="2026-04-24T23:39:04.915186883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:39:05.858250 kubelet[2510]: I0424 23:39:05.857999 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:39:05.859034 kubelet[2510]: E0424 23:39:05.858650 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:05.915026 kubelet[2510]: E0424 23:39:05.914923 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:06.811204 kubelet[2510]: E0424 23:39:06.810759 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:08.808323 kubelet[2510]: E0424 23:39:08.808248 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:09.445253 containerd[1463]: time="2026-04-24T23:39:09.445156950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:09.445834 containerd[1463]: time="2026-04-24T23:39:09.445717568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:39:09.446593 containerd[1463]: 
time="2026-04-24T23:39:09.446520248Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:09.449187 containerd[1463]: time="2026-04-24T23:39:09.449152507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:09.449869 containerd[1463]: time="2026-04-24T23:39:09.449819708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.534582094s" Apr 24 23:39:09.449869 containerd[1463]: time="2026-04-24T23:39:09.449865247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:39:09.456360 containerd[1463]: time="2026-04-24T23:39:09.456239553Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:39:09.474804 containerd[1463]: time="2026-04-24T23:39:09.474696242Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d\"" Apr 24 23:39:09.475917 containerd[1463]: time="2026-04-24T23:39:09.475814706Z" level=info msg="StartContainer for \"af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d\"" Apr 24 23:39:09.523388 systemd[1]: Started 
cri-containerd-af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d.scope - libcontainer container af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d. Apr 24 23:39:09.555079 containerd[1463]: time="2026-04-24T23:39:09.554978784Z" level=info msg="StartContainer for \"af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d\" returns successfully" Apr 24 23:39:10.132012 systemd[1]: cri-containerd-af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d.scope: Deactivated successfully. Apr 24 23:39:10.157631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d-rootfs.mount: Deactivated successfully. Apr 24 23:39:10.188817 containerd[1463]: time="2026-04-24T23:39:10.188618967Z" level=info msg="shim disconnected" id=af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d namespace=k8s.io Apr 24 23:39:10.188817 containerd[1463]: time="2026-04-24T23:39:10.188685489Z" level=warning msg="cleaning up after shim disconnected" id=af220038d44cd4d9dcb460eab9a624467aa70d7b1a76c3040f9c8c46b8e4a83d namespace=k8s.io Apr 24 23:39:10.188817 containerd[1463]: time="2026-04-24T23:39:10.188691910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:10.190625 kubelet[2510]: I0424 23:39:10.189455 2510 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 24 23:39:10.248540 systemd[1]: Created slice kubepods-besteffort-pod90155a0c_e2a3_4e4e_bd3b_19b850d41c9a.slice - libcontainer container kubepods-besteffort-pod90155a0c_e2a3_4e4e_bd3b_19b850d41c9a.slice. Apr 24 23:39:10.261028 systemd[1]: Created slice kubepods-burstable-pod8ef9041b_388f_45eb_96a4_401487c8a29a.slice - libcontainer container kubepods-burstable-pod8ef9041b_388f_45eb_96a4_401487c8a29a.slice. 
Apr 24 23:39:10.267276 systemd[1]: Created slice kubepods-besteffort-pod930956ca_d26d_4366_9b81_420c40db0a9a.slice - libcontainer container kubepods-besteffort-pod930956ca_d26d_4366_9b81_420c40db0a9a.slice. Apr 24 23:39:10.273334 systemd[1]: Created slice kubepods-besteffort-pod552c601a_b830_4d6c_90dc_907cfec7edbf.slice - libcontainer container kubepods-besteffort-pod552c601a_b830_4d6c_90dc_907cfec7edbf.slice. Apr 24 23:39:10.281074 systemd[1]: Created slice kubepods-burstable-pod67600210_1e44_4182_a447_6bd334f7adf6.slice - libcontainer container kubepods-burstable-pod67600210_1e44_4182_a447_6bd334f7adf6.slice. Apr 24 23:39:10.284381 systemd[1]: Created slice kubepods-besteffort-podb38ee6f1_72cb_4a71_aae4_824398193815.slice - libcontainer container kubepods-besteffort-podb38ee6f1_72cb_4a71_aae4_824398193815.slice. Apr 24 23:39:10.288770 systemd[1]: Created slice kubepods-besteffort-podd2fbecdb_999b_4ff1_ac90_f81a5cfb1384.slice - libcontainer container kubepods-besteffort-podd2fbecdb_999b_4ff1_ac90_f81a5cfb1384.slice. 
Apr 24 23:39:10.403545 kubelet[2510]: I0424 23:39:10.403355 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-nginx-config\") pod \"whisker-7bffb8c7cd-rxlxx\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " pod="calico-system/whisker-7bffb8c7cd-rxlxx" Apr 24 23:39:10.403545 kubelet[2510]: I0424 23:39:10.403400 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4tns\" (UniqueName: \"kubernetes.io/projected/67600210-1e44-4182-a447-6bd334f7adf6-kube-api-access-p4tns\") pod \"coredns-66bc5c9577-htcpb\" (UID: \"67600210-1e44-4182-a447-6bd334f7adf6\") " pod="kube-system/coredns-66bc5c9577-htcpb" Apr 24 23:39:10.403545 kubelet[2510]: I0424 23:39:10.403416 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90155a0c-e2a3-4e4e-bd3b-19b850d41c9a-tigera-ca-bundle\") pod \"calico-kube-controllers-7874b6c748-zjlx4\" (UID: \"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a\") " pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" Apr 24 23:39:10.403545 kubelet[2510]: I0424 23:39:10.403430 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-ca-bundle\") pod \"whisker-7bffb8c7cd-rxlxx\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " pod="calico-system/whisker-7bffb8c7cd-rxlxx" Apr 24 23:39:10.403545 kubelet[2510]: I0424 23:39:10.403444 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/552c601a-b830-4d6c-90dc-907cfec7edbf-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-7nrvs\" (UID: 
\"552c601a-b830-4d6c-90dc-907cfec7edbf\") " pod="calico-system/goldmane-cccfbd5cf-7nrvs" Apr 24 23:39:10.403782 kubelet[2510]: I0424 23:39:10.403455 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgrd\" (UniqueName: \"kubernetes.io/projected/552c601a-b830-4d6c-90dc-907cfec7edbf-kube-api-access-jjgrd\") pod \"goldmane-cccfbd5cf-7nrvs\" (UID: \"552c601a-b830-4d6c-90dc-907cfec7edbf\") " pod="calico-system/goldmane-cccfbd5cf-7nrvs" Apr 24 23:39:10.403782 kubelet[2510]: I0424 23:39:10.403505 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdfwx\" (UniqueName: \"kubernetes.io/projected/b38ee6f1-72cb-4a71-aae4-824398193815-kube-api-access-xdfwx\") pod \"whisker-7bffb8c7cd-rxlxx\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " pod="calico-system/whisker-7bffb8c7cd-rxlxx" Apr 24 23:39:10.403782 kubelet[2510]: I0424 23:39:10.403537 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2fbecdb-999b-4ff1-ac90-f81a5cfb1384-calico-apiserver-certs\") pod \"calico-apiserver-88785c5b9-pnfsw\" (UID: \"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384\") " pod="calico-system/calico-apiserver-88785c5b9-pnfsw" Apr 24 23:39:10.403782 kubelet[2510]: I0424 23:39:10.403556 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtc9n\" (UniqueName: \"kubernetes.io/projected/90155a0c-e2a3-4e4e-bd3b-19b850d41c9a-kube-api-access-dtc9n\") pod \"calico-kube-controllers-7874b6c748-zjlx4\" (UID: \"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a\") " pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" Apr 24 23:39:10.403782 kubelet[2510]: I0424 23:39:10.403606 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/552c601a-b830-4d6c-90dc-907cfec7edbf-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-7nrvs\" (UID: \"552c601a-b830-4d6c-90dc-907cfec7edbf\") " pod="calico-system/goldmane-cccfbd5cf-7nrvs" Apr 24 23:39:10.403867 kubelet[2510]: I0424 23:39:10.403623 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67600210-1e44-4182-a447-6bd334f7adf6-config-volume\") pod \"coredns-66bc5c9577-htcpb\" (UID: \"67600210-1e44-4182-a447-6bd334f7adf6\") " pod="kube-system/coredns-66bc5c9577-htcpb" Apr 24 23:39:10.403867 kubelet[2510]: I0424 23:39:10.403641 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztl5v\" (UniqueName: \"kubernetes.io/projected/d2fbecdb-999b-4ff1-ac90-f81a5cfb1384-kube-api-access-ztl5v\") pod \"calico-apiserver-88785c5b9-pnfsw\" (UID: \"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384\") " pod="calico-system/calico-apiserver-88785c5b9-pnfsw" Apr 24 23:39:10.403867 kubelet[2510]: I0424 23:39:10.403658 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kblvb\" (UniqueName: \"kubernetes.io/projected/8ef9041b-388f-45eb-96a4-401487c8a29a-kube-api-access-kblvb\") pod \"coredns-66bc5c9577-4kc84\" (UID: \"8ef9041b-388f-45eb-96a4-401487c8a29a\") " pod="kube-system/coredns-66bc5c9577-4kc84" Apr 24 23:39:10.403867 kubelet[2510]: I0424 23:39:10.403676 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-backend-key-pair\") pod \"whisker-7bffb8c7cd-rxlxx\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " pod="calico-system/whisker-7bffb8c7cd-rxlxx" Apr 24 23:39:10.403867 kubelet[2510]: I0424 23:39:10.403691 2510 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552c601a-b830-4d6c-90dc-907cfec7edbf-config\") pod \"goldmane-cccfbd5cf-7nrvs\" (UID: \"552c601a-b830-4d6c-90dc-907cfec7edbf\") " pod="calico-system/goldmane-cccfbd5cf-7nrvs" Apr 24 23:39:10.403945 kubelet[2510]: I0424 23:39:10.403705 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef9041b-388f-45eb-96a4-401487c8a29a-config-volume\") pod \"coredns-66bc5c9577-4kc84\" (UID: \"8ef9041b-388f-45eb-96a4-401487c8a29a\") " pod="kube-system/coredns-66bc5c9577-4kc84" Apr 24 23:39:10.403945 kubelet[2510]: I0424 23:39:10.403720 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/930956ca-d26d-4366-9b81-420c40db0a9a-calico-apiserver-certs\") pod \"calico-apiserver-88785c5b9-j7rvq\" (UID: \"930956ca-d26d-4366-9b81-420c40db0a9a\") " pod="calico-system/calico-apiserver-88785c5b9-j7rvq" Apr 24 23:39:10.403945 kubelet[2510]: I0424 23:39:10.403733 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg2qg\" (UniqueName: \"kubernetes.io/projected/930956ca-d26d-4366-9b81-420c40db0a9a-kube-api-access-cg2qg\") pod \"calico-apiserver-88785c5b9-j7rvq\" (UID: \"930956ca-d26d-4366-9b81-420c40db0a9a\") " pod="calico-system/calico-apiserver-88785c5b9-j7rvq" Apr 24 23:39:10.566000 containerd[1463]: time="2026-04-24T23:39:10.565893478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7874b6c748-zjlx4,Uid:90155a0c-e2a3-4e4e-bd3b-19b850d41c9a,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:10.567327 kubelet[2510]: E0424 23:39:10.567249 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:10.568329 containerd[1463]: time="2026-04-24T23:39:10.568298753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4kc84,Uid:8ef9041b-388f-45eb-96a4-401487c8a29a,Namespace:kube-system,Attempt:0,}" Apr 24 23:39:10.577162 containerd[1463]: time="2026-04-24T23:39:10.576185225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-j7rvq,Uid:930956ca-d26d-4366-9b81-420c40db0a9a,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:10.581650 containerd[1463]: time="2026-04-24T23:39:10.581523661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7nrvs,Uid:552c601a-b830-4d6c-90dc-907cfec7edbf,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:10.585917 kubelet[2510]: E0424 23:39:10.585890 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:10.586729 containerd[1463]: time="2026-04-24T23:39:10.586693817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-htcpb,Uid:67600210-1e44-4182-a447-6bd334f7adf6,Namespace:kube-system,Attempt:0,}" Apr 24 23:39:10.588785 containerd[1463]: time="2026-04-24T23:39:10.588561111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bffb8c7cd-rxlxx,Uid:b38ee6f1-72cb-4a71-aae4-824398193815,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:10.593301 containerd[1463]: time="2026-04-24T23:39:10.593262571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-pnfsw,Uid:d2fbecdb-999b-4ff1-ac90-f81a5cfb1384,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:10.758241 containerd[1463]: time="2026-04-24T23:39:10.757728185Z" level=error msg="Failed to destroy network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.759337 containerd[1463]: time="2026-04-24T23:39:10.758430204Z" level=error msg="encountered an error cleaning up failed sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.759337 containerd[1463]: time="2026-04-24T23:39:10.758474076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4kc84,Uid:8ef9041b-388f-45eb-96a4-401487c8a29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.766713 kubelet[2510]: E0424 23:39:10.766529 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.767357 kubelet[2510]: E0424 23:39:10.767322 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4kc84" Apr 24 23:39:10.767388 kubelet[2510]: E0424 23:39:10.767364 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4kc84" Apr 24 23:39:10.770159 kubelet[2510]: E0424 23:39:10.767455 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4kc84_kube-system(8ef9041b-388f-45eb-96a4-401487c8a29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4kc84_kube-system(8ef9041b-388f-45eb-96a4-401487c8a29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4kc84" podUID="8ef9041b-388f-45eb-96a4-401487c8a29a" Apr 24 23:39:10.774675 containerd[1463]: time="2026-04-24T23:39:10.773752445Z" level=error msg="Failed to destroy network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.774675 containerd[1463]: time="2026-04-24T23:39:10.774242567Z" level=error msg="encountered an error cleaning up failed sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.774675 containerd[1463]: time="2026-04-24T23:39:10.774361230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-j7rvq,Uid:930956ca-d26d-4366-9b81-420c40db0a9a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.777220 kubelet[2510]: E0424 23:39:10.776099 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.777220 kubelet[2510]: E0424 23:39:10.776941 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88785c5b9-j7rvq" Apr 24 23:39:10.777220 kubelet[2510]: E0424 23:39:10.776988 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88785c5b9-j7rvq" Apr 24 23:39:10.777374 kubelet[2510]: E0424 23:39:10.777188 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88785c5b9-j7rvq_calico-system(930956ca-d26d-4366-9b81-420c40db0a9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88785c5b9-j7rvq_calico-system(930956ca-d26d-4366-9b81-420c40db0a9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-88785c5b9-j7rvq" podUID="930956ca-d26d-4366-9b81-420c40db0a9a" Apr 24 23:39:10.782810 containerd[1463]: time="2026-04-24T23:39:10.782726009Z" level=error msg="Failed to destroy network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.784164 containerd[1463]: time="2026-04-24T23:39:10.783369434Z" level=error msg="encountered an error cleaning up failed sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.784164 containerd[1463]: time="2026-04-24T23:39:10.783478713Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-htcpb,Uid:67600210-1e44-4182-a447-6bd334f7adf6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.784803 kubelet[2510]: E0424 23:39:10.784678 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.784899 kubelet[2510]: E0424 23:39:10.784828 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-htcpb" Apr 24 23:39:10.785403 kubelet[2510]: E0424 23:39:10.785216 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-htcpb" Apr 24 23:39:10.785403 kubelet[2510]: E0424 23:39:10.785338 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-66bc5c9577-htcpb_kube-system(67600210-1e44-4182-a447-6bd334f7adf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-htcpb_kube-system(67600210-1e44-4182-a447-6bd334f7adf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-htcpb" podUID="67600210-1e44-4182-a447-6bd334f7adf6" Apr 24 23:39:10.794380 containerd[1463]: time="2026-04-24T23:39:10.794226694Z" level=error msg="Failed to destroy network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.794886 containerd[1463]: time="2026-04-24T23:39:10.794449746Z" level=error msg="Failed to destroy network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.795075 containerd[1463]: time="2026-04-24T23:39:10.794940369Z" level=error msg="encountered an error cleaning up failed sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.795186 containerd[1463]: time="2026-04-24T23:39:10.795165073Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bffb8c7cd-rxlxx,Uid:b38ee6f1-72cb-4a71-aae4-824398193815,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.796042 kubelet[2510]: E0424 23:39:10.795880 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.796090 kubelet[2510]: E0424 23:39:10.796048 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bffb8c7cd-rxlxx" Apr 24 23:39:10.796090 kubelet[2510]: E0424 23:39:10.796066 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bffb8c7cd-rxlxx" Apr 24 23:39:10.796214 kubelet[2510]: E0424 23:39:10.796164 2510 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"whisker-7bffb8c7cd-rxlxx_calico-system(b38ee6f1-72cb-4a71-aae4-824398193815)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bffb8c7cd-rxlxx_calico-system(b38ee6f1-72cb-4a71-aae4-824398193815)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bffb8c7cd-rxlxx" podUID="b38ee6f1-72cb-4a71-aae4-824398193815" Apr 24 23:39:10.796966 containerd[1463]: time="2026-04-24T23:39:10.796297139Z" level=error msg="encountered an error cleaning up failed sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.796966 containerd[1463]: time="2026-04-24T23:39:10.796907790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7874b6c748-zjlx4,Uid:90155a0c-e2a3-4e4e-bd3b-19b850d41c9a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.797457 kubelet[2510]: E0424 23:39:10.797385 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.797536 kubelet[2510]: E0424 23:39:10.797512 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" Apr 24 23:39:10.797536 kubelet[2510]: E0424 23:39:10.797525 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" Apr 24 23:39:10.797788 kubelet[2510]: E0424 23:39:10.797632 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7874b6c748-zjlx4_calico-system(90155a0c-e2a3-4e4e-bd3b-19b850d41c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7874b6c748-zjlx4_calico-system(90155a0c-e2a3-4e4e-bd3b-19b850d41c9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" 
podUID="90155a0c-e2a3-4e4e-bd3b-19b850d41c9a" Apr 24 23:39:10.814250 systemd[1]: Created slice kubepods-besteffort-pod432f514d_2771_4770_8cbd_167f3881d2c4.slice - libcontainer container kubepods-besteffort-pod432f514d_2771_4770_8cbd_167f3881d2c4.slice. Apr 24 23:39:10.814826 containerd[1463]: time="2026-04-24T23:39:10.814371186Z" level=error msg="Failed to destroy network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.814860 containerd[1463]: time="2026-04-24T23:39:10.814824765Z" level=error msg="encountered an error cleaning up failed sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.815182 containerd[1463]: time="2026-04-24T23:39:10.814868642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7nrvs,Uid:552c601a-b830-4d6c-90dc-907cfec7edbf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.815365 kubelet[2510]: E0424 23:39:10.815065 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.815365 kubelet[2510]: E0424 23:39:10.815209 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-7nrvs" Apr 24 23:39:10.815365 kubelet[2510]: E0424 23:39:10.815266 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-7nrvs" Apr 24 23:39:10.815432 kubelet[2510]: E0424 23:39:10.815363 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-7nrvs_calico-system(552c601a-b830-4d6c-90dc-907cfec7edbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-7nrvs_calico-system(552c601a-b830-4d6c-90dc-907cfec7edbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-7nrvs" podUID="552c601a-b830-4d6c-90dc-907cfec7edbf" Apr 24 23:39:10.820811 containerd[1463]: time="2026-04-24T23:39:10.820667528Z" level=error msg="Failed to destroy network for sandbox 
\"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.821291 containerd[1463]: time="2026-04-24T23:39:10.820877332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2j2n,Uid:432f514d-2771-4770-8cbd-167f3881d2c4,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:10.821291 containerd[1463]: time="2026-04-24T23:39:10.821033378Z" level=error msg="encountered an error cleaning up failed sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.821291 containerd[1463]: time="2026-04-24T23:39:10.821161532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-pnfsw,Uid:d2fbecdb-999b-4ff1-ac90-f81a5cfb1384,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.821512 kubelet[2510]: E0424 23:39:10.821403 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.821688 kubelet[2510]: E0424 23:39:10.821480 2510 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88785c5b9-pnfsw" Apr 24 23:39:10.821688 kubelet[2510]: E0424 23:39:10.821534 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-88785c5b9-pnfsw" Apr 24 23:39:10.821688 kubelet[2510]: E0424 23:39:10.821616 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88785c5b9-pnfsw_calico-system(d2fbecdb-999b-4ff1-ac90-f81a5cfb1384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88785c5b9-pnfsw_calico-system(d2fbecdb-999b-4ff1-ac90-f81a5cfb1384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-88785c5b9-pnfsw" podUID="d2fbecdb-999b-4ff1-ac90-f81a5cfb1384" Apr 24 23:39:10.894865 containerd[1463]: time="2026-04-24T23:39:10.894761063Z" level=error msg="Failed to destroy network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.895417 containerd[1463]: time="2026-04-24T23:39:10.895158710Z" level=error msg="encountered an error cleaning up failed sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.895417 containerd[1463]: time="2026-04-24T23:39:10.895231266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2j2n,Uid:432f514d-2771-4770-8cbd-167f3881d2c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.896009 kubelet[2510]: E0424 23:39:10.895845 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:10.896157 kubelet[2510]: E0424 23:39:10.896045 2510 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2j2n" Apr 24 23:39:10.896157 kubelet[2510]: E0424 23:39:10.896063 2510 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2j2n" Apr 24 23:39:10.896231 kubelet[2510]: E0424 23:39:10.896194 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b2j2n_calico-system(432f514d-2771-4770-8cbd-167f3881d2c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b2j2n_calico-system(432f514d-2771-4770-8cbd-167f3881d2c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:10.940377 kubelet[2510]: I0424 23:39:10.940255 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:10.942197 kubelet[2510]: I0424 23:39:10.941551 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:10.954755 kubelet[2510]: I0424 23:39:10.954734 2510 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:10.963153 containerd[1463]: time="2026-04-24T23:39:10.960179329Z" level=info msg="StopPodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\"" Apr 24 23:39:10.963153 containerd[1463]: time="2026-04-24T23:39:10.960620155Z" level=info msg="StopPodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\"" Apr 24 23:39:10.963153 containerd[1463]: time="2026-04-24T23:39:10.961461709Z" level=info msg="StopPodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\"" Apr 24 23:39:10.966929 containerd[1463]: time="2026-04-24T23:39:10.964154464Z" level=info msg="Ensure that sandbox cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25 in task-service has been cleanup successfully" Apr 24 23:39:10.966929 containerd[1463]: time="2026-04-24T23:39:10.966785691Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:39:10.967017 kubelet[2510]: I0424 23:39:10.964824 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Apr 24 23:39:10.967017 kubelet[2510]: I0424 23:39:10.966003 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:10.967101 containerd[1463]: time="2026-04-24T23:39:10.966954278Z" level=info msg="StopPodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\"" Apr 24 23:39:10.967432 containerd[1463]: time="2026-04-24T23:39:10.967174471Z" level=info msg="Ensure that sandbox f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429 in task-service has been cleanup successfully" Apr 24 23:39:10.967432 containerd[1463]: 
time="2026-04-24T23:39:10.967216151Z" level=info msg="Ensure that sandbox cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5 in task-service has been cleanup successfully" Apr 24 23:39:10.967544 containerd[1463]: time="2026-04-24T23:39:10.967512047Z" level=info msg="Ensure that sandbox fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c in task-service has been cleanup successfully" Apr 24 23:39:10.970438 containerd[1463]: time="2026-04-24T23:39:10.970378073Z" level=info msg="StopPodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\"" Apr 24 23:39:10.971301 containerd[1463]: time="2026-04-24T23:39:10.971234408Z" level=info msg="Ensure that sandbox 56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002 in task-service has been cleanup successfully" Apr 24 23:39:10.976707 kubelet[2510]: I0424 23:39:10.976651 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:10.980994 containerd[1463]: time="2026-04-24T23:39:10.980922728Z" level=info msg="StopPodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\"" Apr 24 23:39:10.981778 containerd[1463]: time="2026-04-24T23:39:10.981762439Z" level=info msg="Ensure that sandbox b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994 in task-service has been cleanup successfully" Apr 24 23:39:10.987468 kubelet[2510]: I0424 23:39:10.987394 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:10.989329 containerd[1463]: time="2026-04-24T23:39:10.989310607Z" level=info msg="StopPodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\"" Apr 24 23:39:10.990243 kubelet[2510]: I0424 23:39:10.990179 2510 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:10.990318 containerd[1463]: time="2026-04-24T23:39:10.990203411Z" level=info msg="Ensure that sandbox b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833 in task-service has been cleanup successfully" Apr 24 23:39:10.990717 containerd[1463]: time="2026-04-24T23:39:10.990621290Z" level=info msg="StopPodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\"" Apr 24 23:39:10.990751 containerd[1463]: time="2026-04-24T23:39:10.990732721Z" level=info msg="Ensure that sandbox 2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8 in task-service has been cleanup successfully" Apr 24 23:39:11.016220 containerd[1463]: time="2026-04-24T23:39:11.015940958Z" level=info msg="CreateContainer within sandbox \"ba57b99f1e5b29df59a29b0e2e658dd6f1fdd70a66c17307523f9838abebaef8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e2ea2fa68e0bab65b3d574828c24c6b62dc27accff749742412cb8657c73b761\"" Apr 24 23:39:11.020194 containerd[1463]: time="2026-04-24T23:39:11.018928458Z" level=info msg="StartContainer for \"e2ea2fa68e0bab65b3d574828c24c6b62dc27accff749742412cb8657c73b761\"" Apr 24 23:39:11.074893 containerd[1463]: time="2026-04-24T23:39:11.074835437Z" level=error msg="StopPodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" failed" error="failed to destroy network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.075360 containerd[1463]: time="2026-04-24T23:39:11.075286681Z" level=error msg="StopPodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" failed" error="failed to destroy network for sandbox 
\"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.075859 kubelet[2510]: E0424 23:39:11.075833 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:11.076089 kubelet[2510]: E0424 23:39:11.076056 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994"} Apr 24 23:39:11.076259 kubelet[2510]: E0424 23:39:11.076248 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"432f514d-2771-4770-8cbd-167f3881d2c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.076420 kubelet[2510]: E0424 23:39:11.076404 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"432f514d-2771-4770-8cbd-167f3881d2c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2j2n" podUID="432f514d-2771-4770-8cbd-167f3881d2c4" Apr 24 23:39:11.076524 kubelet[2510]: E0424 23:39:11.075942 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:11.076568 kubelet[2510]: E0424 23:39:11.076560 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25"} Apr 24 23:39:11.076649 kubelet[2510]: E0424 23:39:11.076640 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ef9041b-388f-45eb-96a4-401487c8a29a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.076735 kubelet[2510]: E0424 23:39:11.076725 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ef9041b-388f-45eb-96a4-401487c8a29a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4kc84" podUID="8ef9041b-388f-45eb-96a4-401487c8a29a" Apr 24 23:39:11.079529 containerd[1463]: time="2026-04-24T23:39:11.079460271Z" level=error msg="StopPodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" failed" error="failed to destroy network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.079743 kubelet[2510]: E0424 23:39:11.079684 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Apr 24 23:39:11.079743 kubelet[2510]: E0424 23:39:11.079727 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"} Apr 24 23:39:11.079796 kubelet[2510]: E0424 23:39:11.079752 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"930956ca-d26d-4366-9b81-420c40db0a9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 
23:39:11.079796 kubelet[2510]: E0424 23:39:11.079773 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"930956ca-d26d-4366-9b81-420c40db0a9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-88785c5b9-j7rvq" podUID="930956ca-d26d-4366-9b81-420c40db0a9a" Apr 24 23:39:11.080080 containerd[1463]: time="2026-04-24T23:39:11.079980125Z" level=error msg="StopPodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" failed" error="failed to destroy network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.080717 kubelet[2510]: E0424 23:39:11.080698 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:11.080753 containerd[1463]: time="2026-04-24T23:39:11.080736573Z" level=error msg="StopPodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" failed" error="failed to destroy network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.080799 kubelet[2510]: E0424 23:39:11.080790 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8"} Apr 24 23:39:11.080836 kubelet[2510]: E0424 23:39:11.080829 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.080961 kubelet[2510]: E0424 23:39:11.080906 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:11.080961 kubelet[2510]: E0424 23:39:11.080953 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833"} Apr 24 23:39:11.080995 kubelet[2510]: E0424 23:39:11.080973 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.081048 kubelet[2510]: E0424 23:39:11.080992 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-88785c5b9-pnfsw" podUID="d2fbecdb-999b-4ff1-ac90-f81a5cfb1384" Apr 24 23:39:11.081048 kubelet[2510]: E0424 23:39:11.080920 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" podUID="90155a0c-e2a3-4e4e-bd3b-19b850d41c9a" Apr 24 23:39:11.081797 containerd[1463]: time="2026-04-24T23:39:11.081763029Z" level=error msg="StopPodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" failed" error="failed to destroy network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.081842 containerd[1463]: time="2026-04-24T23:39:11.081797672Z" level=error msg="StopPodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" failed" error="failed to destroy network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.082183 kubelet[2510]: E0424 23:39:11.082087 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:11.082213 kubelet[2510]: E0424 23:39:11.082204 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"} Apr 24 23:39:11.082244 kubelet[2510]: E0424 23:39:11.082221 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b38ee6f1-72cb-4a71-aae4-824398193815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.082306 kubelet[2510]: E0424 23:39:11.082084 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:11.082306 kubelet[2510]: E0424 23:39:11.082273 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5"} Apr 24 23:39:11.082306 kubelet[2510]: E0424 23:39:11.082285 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"552c601a-b830-4d6c-90dc-907cfec7edbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.082306 kubelet[2510]: E0424 23:39:11.082298 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"552c601a-b830-4d6c-90dc-907cfec7edbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-7nrvs" podUID="552c601a-b830-4d6c-90dc-907cfec7edbf" Apr 24 23:39:11.082442 kubelet[2510]: E0424 23:39:11.082317 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b38ee6f1-72cb-4a71-aae4-824398193815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bffb8c7cd-rxlxx" podUID="b38ee6f1-72cb-4a71-aae4-824398193815" Apr 24 23:39:11.082863 containerd[1463]: time="2026-04-24T23:39:11.082817184Z" level=error msg="StopPodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" failed" error="failed to destroy network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:39:11.083060 kubelet[2510]: E0424 23:39:11.083031 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:11.083155 kubelet[2510]: E0424 23:39:11.083067 2510 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c"} Apr 24 23:39:11.083155 kubelet[2510]: E0424 23:39:11.083084 2510 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67600210-1e44-4182-a447-6bd334f7adf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:39:11.083233 kubelet[2510]: E0424 23:39:11.083156 2510 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67600210-1e44-4182-a447-6bd334f7adf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-htcpb" podUID="67600210-1e44-4182-a447-6bd334f7adf6" Apr 24 23:39:11.107335 systemd[1]: Started cri-containerd-e2ea2fa68e0bab65b3d574828c24c6b62dc27accff749742412cb8657c73b761.scope - libcontainer container e2ea2fa68e0bab65b3d574828c24c6b62dc27accff749742412cb8657c73b761. 
Apr 24 23:39:11.136380 containerd[1463]: time="2026-04-24T23:39:11.136320261Z" level=info msg="StartContainer for \"e2ea2fa68e0bab65b3d574828c24c6b62dc27accff749742412cb8657c73b761\" returns successfully" Apr 24 23:39:12.003158 containerd[1463]: time="2026-04-24T23:39:12.002853868Z" level=info msg="StopPodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\"" Apr 24 23:39:12.035908 kubelet[2510]: I0424 23:39:12.034776 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vcwgz" podStartSLOduration=3.4186240469999998 podStartE2EDuration="19.034762197s" podCreationTimestamp="2026-04-24 23:38:53 +0000 UTC" firstStartedPulling="2026-04-24 23:38:53.835015427 +0000 UTC m=+17.200092087" lastFinishedPulling="2026-04-24 23:39:09.451153578 +0000 UTC m=+32.816230237" observedRunningTime="2026-04-24 23:39:12.03406734 +0000 UTC m=+35.399144016" watchObservedRunningTime="2026-04-24 23:39:12.034762197 +0000 UTC m=+35.399838871" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.111 [INFO][3874] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.111 [INFO][3874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" iface="eth0" netns="/var/run/netns/cni-d759342e-b83b-9824-6fa4-57b8dd518a69" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.112 [INFO][3874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" iface="eth0" netns="/var/run/netns/cni-d759342e-b83b-9824-6fa4-57b8dd518a69" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.113 [INFO][3874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" iface="eth0" netns="/var/run/netns/cni-d759342e-b83b-9824-6fa4-57b8dd518a69" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.113 [INFO][3874] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.113 [INFO][3874] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.136 [INFO][3902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.136 [INFO][3902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.136 [INFO][3902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.153 [WARNING][3902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.153 [INFO][3902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0" Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.155 [INFO][3902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:12.159501 containerd[1463]: 2026-04-24 23:39:12.157 [INFO][3874] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:12.160159 containerd[1463]: time="2026-04-24T23:39:12.159901025Z" level=info msg="TearDown network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" successfully" Apr 24 23:39:12.160159 containerd[1463]: time="2026-04-24T23:39:12.159969519Z" level=info msg="StopPodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" returns successfully" Apr 24 23:39:12.161656 systemd[1]: run-netns-cni\x2dd759342e\x2db83b\x2d9824\x2d6fa4\x2d57b8dd518a69.mount: Deactivated successfully. 
Apr 24 23:39:12.239464 kubelet[2510]: I0424 23:39:12.239345 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-nginx-config\") pod \"b38ee6f1-72cb-4a71-aae4-824398193815\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " Apr 24 23:39:12.239464 kubelet[2510]: I0424 23:39:12.239413 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdfwx\" (UniqueName: \"kubernetes.io/projected/b38ee6f1-72cb-4a71-aae4-824398193815-kube-api-access-xdfwx\") pod \"b38ee6f1-72cb-4a71-aae4-824398193815\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " Apr 24 23:39:12.239464 kubelet[2510]: I0424 23:39:12.239460 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-ca-bundle\") pod \"b38ee6f1-72cb-4a71-aae4-824398193815\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " Apr 24 23:39:12.239464 kubelet[2510]: I0424 23:39:12.239487 2510 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-backend-key-pair\") pod \"b38ee6f1-72cb-4a71-aae4-824398193815\" (UID: \"b38ee6f1-72cb-4a71-aae4-824398193815\") " Apr 24 23:39:12.240737 kubelet[2510]: I0424 23:39:12.240656 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "b38ee6f1-72cb-4a71-aae4-824398193815" (UID: "b38ee6f1-72cb-4a71-aae4-824398193815"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:39:12.240737 kubelet[2510]: I0424 23:39:12.240699 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b38ee6f1-72cb-4a71-aae4-824398193815" (UID: "b38ee6f1-72cb-4a71-aae4-824398193815"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:39:12.244367 kubelet[2510]: I0424 23:39:12.244328 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b38ee6f1-72cb-4a71-aae4-824398193815" (UID: "b38ee6f1-72cb-4a71-aae4-824398193815"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:39:12.244570 kubelet[2510]: I0424 23:39:12.244512 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38ee6f1-72cb-4a71-aae4-824398193815-kube-api-access-xdfwx" (OuterVolumeSpecName: "kube-api-access-xdfwx") pod "b38ee6f1-72cb-4a71-aae4-824398193815" (UID: "b38ee6f1-72cb-4a71-aae4-824398193815"). InnerVolumeSpecName "kube-api-access-xdfwx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:39:12.246390 systemd[1]: var-lib-kubelet-pods-b38ee6f1\x2d72cb\x2d4a71\x2daae4\x2d824398193815-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdfwx.mount: Deactivated successfully. Apr 24 23:39:12.246485 systemd[1]: var-lib-kubelet-pods-b38ee6f1\x2d72cb\x2d4a71\x2daae4\x2d824398193815-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 24 23:39:12.340833 kubelet[2510]: I0424 23:39:12.340349 2510 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xdfwx\" (UniqueName: \"kubernetes.io/projected/b38ee6f1-72cb-4a71-aae4-824398193815-kube-api-access-xdfwx\") on node \"localhost\" DevicePath \"\"" Apr 24 23:39:12.340833 kubelet[2510]: I0424 23:39:12.340388 2510 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 24 23:39:12.340833 kubelet[2510]: I0424 23:39:12.340396 2510 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b38ee6f1-72cb-4a71-aae4-824398193815-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 24 23:39:12.340833 kubelet[2510]: I0424 23:39:12.340428 2510 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b38ee6f1-72cb-4a71-aae4-824398193815-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 24 23:39:12.813477 systemd[1]: Removed slice kubepods-besteffort-podb38ee6f1_72cb_4a71_aae4_824398193815.slice - libcontainer container kubepods-besteffort-podb38ee6f1_72cb_4a71_aae4_824398193815.slice. Apr 24 23:39:13.012284 kernel: calico-node[3954]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:39:13.090980 systemd[1]: Created slice kubepods-besteffort-pod245d6535_8019_4570_9c61_7be0528731d0.slice - libcontainer container kubepods-besteffort-pod245d6535_8019_4570_9c61_7be0528731d0.slice. 
Apr 24 23:39:13.148615 kubelet[2510]: I0424 23:39:13.147805 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245d6535-8019-4570-9c61-7be0528731d0-whisker-ca-bundle\") pod \"whisker-58d6c8f9f9-4ngsq\" (UID: \"245d6535-8019-4570-9c61-7be0528731d0\") " pod="calico-system/whisker-58d6c8f9f9-4ngsq" Apr 24 23:39:13.148615 kubelet[2510]: I0424 23:39:13.148420 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24778\" (UniqueName: \"kubernetes.io/projected/245d6535-8019-4570-9c61-7be0528731d0-kube-api-access-24778\") pod \"whisker-58d6c8f9f9-4ngsq\" (UID: \"245d6535-8019-4570-9c61-7be0528731d0\") " pod="calico-system/whisker-58d6c8f9f9-4ngsq" Apr 24 23:39:13.148615 kubelet[2510]: I0424 23:39:13.148473 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/245d6535-8019-4570-9c61-7be0528731d0-nginx-config\") pod \"whisker-58d6c8f9f9-4ngsq\" (UID: \"245d6535-8019-4570-9c61-7be0528731d0\") " pod="calico-system/whisker-58d6c8f9f9-4ngsq" Apr 24 23:39:13.148615 kubelet[2510]: I0424 23:39:13.148489 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245d6535-8019-4570-9c61-7be0528731d0-whisker-backend-key-pair\") pod \"whisker-58d6c8f9f9-4ngsq\" (UID: \"245d6535-8019-4570-9c61-7be0528731d0\") " pod="calico-system/whisker-58d6c8f9f9-4ngsq" Apr 24 23:39:13.400190 containerd[1463]: time="2026-04-24T23:39:13.400100161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d6c8f9f9-4ngsq,Uid:245d6535-8019-4570-9c61-7be0528731d0,Namespace:calico-system,Attempt:0,}" Apr 24 23:39:13.521470 systemd-networkd[1389]: vxlan.calico: Link UP Apr 24 23:39:13.521477 systemd-networkd[1389]: 
vxlan.calico: Gained carrier Apr 24 23:39:13.582856 systemd-networkd[1389]: calib60f2924e9d: Link UP Apr 24 23:39:13.583278 systemd-networkd[1389]: calib60f2924e9d: Gained carrier Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.458 [INFO][4077] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0 whisker-58d6c8f9f9- calico-system 245d6535-8019-4570-9c61-7be0528731d0 919 0 2026-04-24 23:39:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58d6c8f9f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-58d6c8f9f9-4ngsq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib60f2924e9d [] [] }} ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.459 [INFO][4077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.503 [INFO][4106] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" HandleID="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Workload="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.511 [INFO][4106] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" 
HandleID="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Workload="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-58d6c8f9f9-4ngsq", "timestamp":"2026-04-24 23:39:13.503380092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017a6e0)} Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.511 [INFO][4106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.511 [INFO][4106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.511 [INFO][4106] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.517 [INFO][4106] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.525 [INFO][4106] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.545 [INFO][4106] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.549 [INFO][4106] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.551 [INFO][4106] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:13.598949 
containerd[1463]: 2026-04-24 23:39:13.552 [INFO][4106] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.559 [INFO][4106] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533 Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.566 [INFO][4106] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.574 [INFO][4106] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.574 [INFO][4106] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" host="localhost" Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.574 [INFO][4106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:39:13.598949 containerd[1463]: 2026-04-24 23:39:13.574 [INFO][4106] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" HandleID="k8s-pod-network.1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Workload="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.599537 containerd[1463]: 2026-04-24 23:39:13.578 [INFO][4077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0", GenerateName:"whisker-58d6c8f9f9-", Namespace:"calico-system", SelfLink:"", UID:"245d6535-8019-4570-9c61-7be0528731d0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58d6c8f9f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-58d6c8f9f9-4ngsq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib60f2924e9d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:13.599537 containerd[1463]: 2026-04-24 23:39:13.578 [INFO][4077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.599537 containerd[1463]: 2026-04-24 23:39:13.578 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib60f2924e9d ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.599537 containerd[1463]: 2026-04-24 23:39:13.584 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.599537 containerd[1463]: 2026-04-24 23:39:13.584 [INFO][4077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0", GenerateName:"whisker-58d6c8f9f9-", Namespace:"calico-system", SelfLink:"", UID:"245d6535-8019-4570-9c61-7be0528731d0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 39, 13, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58d6c8f9f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533", Pod:"whisker-58d6c8f9f9-4ngsq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib60f2924e9d", MAC:"36:22:de:4c:e0:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:13.599537 containerd[1463]: 2026-04-24 23:39:13.594 [INFO][4077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533" Namespace="calico-system" Pod="whisker-58d6c8f9f9-4ngsq" WorkloadEndpoint="localhost-k8s-whisker--58d6c8f9f9--4ngsq-eth0" Apr 24 23:39:13.633886 containerd[1463]: time="2026-04-24T23:39:13.633522774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:13.633886 containerd[1463]: time="2026-04-24T23:39:13.633568305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:13.633886 containerd[1463]: time="2026-04-24T23:39:13.633579596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:13.633886 containerd[1463]: time="2026-04-24T23:39:13.633684219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:13.664878 systemd[1]: Started cri-containerd-1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533.scope - libcontainer container 1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533. Apr 24 23:39:13.682777 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:13.711969 containerd[1463]: time="2026-04-24T23:39:13.711926064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d6c8f9f9-4ngsq,Uid:245d6535-8019-4570-9c61-7be0528731d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533\"" Apr 24 23:39:13.713521 containerd[1463]: time="2026-04-24T23:39:13.713484973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:39:14.648872 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Apr 24 23:39:14.809824 kubelet[2510]: I0424 23:39:14.809746 2510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38ee6f1-72cb-4a71-aae4-824398193815" path="/var/lib/kubelet/pods/b38ee6f1-72cb-4a71-aae4-824398193815/volumes" Apr 24 23:39:15.201396 containerd[1463]: time="2026-04-24T23:39:15.201226775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:15.203151 containerd[1463]: time="2026-04-24T23:39:15.202371130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:39:15.204030 containerd[1463]: time="2026-04-24T23:39:15.203977978Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:15.206758 containerd[1463]: time="2026-04-24T23:39:15.206665389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:15.207756 containerd[1463]: time="2026-04-24T23:39:15.207718329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.494190875s" Apr 24 23:39:15.207756 containerd[1463]: time="2026-04-24T23:39:15.207754584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:39:15.214665 containerd[1463]: time="2026-04-24T23:39:15.214553858Z" level=info msg="CreateContainer within sandbox \"1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:39:15.232261 containerd[1463]: time="2026-04-24T23:39:15.232101110Z" level=info msg="CreateContainer within sandbox \"1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"92cc1bf2d678706f9d74541381dc1ba03f6798342ee732db3a8491b22a3e0f20\"" Apr 24 23:39:15.233013 containerd[1463]: time="2026-04-24T23:39:15.232770416Z" level=info msg="StartContainer for \"92cc1bf2d678706f9d74541381dc1ba03f6798342ee732db3a8491b22a3e0f20\"" Apr 24 23:39:15.273290 systemd[1]: Started 
cri-containerd-92cc1bf2d678706f9d74541381dc1ba03f6798342ee732db3a8491b22a3e0f20.scope - libcontainer container 92cc1bf2d678706f9d74541381dc1ba03f6798342ee732db3a8491b22a3e0f20. Apr 24 23:39:15.318543 containerd[1463]: time="2026-04-24T23:39:15.318408898Z" level=info msg="StartContainer for \"92cc1bf2d678706f9d74541381dc1ba03f6798342ee732db3a8491b22a3e0f20\" returns successfully" Apr 24 23:39:15.321926 containerd[1463]: time="2026-04-24T23:39:15.319858443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:39:15.544684 systemd-networkd[1389]: calib60f2924e9d: Gained IPv6LL Apr 24 23:39:17.205808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234166179.mount: Deactivated successfully. Apr 24 23:39:17.231474 containerd[1463]: time="2026-04-24T23:39:17.231321082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:17.232390 containerd[1463]: time="2026-04-24T23:39:17.232344284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 24 23:39:17.233772 containerd[1463]: time="2026-04-24T23:39:17.233724394Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:17.235812 containerd[1463]: time="2026-04-24T23:39:17.235767245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:17.238854 containerd[1463]: time="2026-04-24T23:39:17.238813311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.918929931s" Apr 24 23:39:17.238854 containerd[1463]: time="2026-04-24T23:39:17.238852644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 24 23:39:17.244278 containerd[1463]: time="2026-04-24T23:39:17.244243285Z" level=info msg="CreateContainer within sandbox \"1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:39:17.259963 containerd[1463]: time="2026-04-24T23:39:17.259902015Z" level=info msg="CreateContainer within sandbox \"1d27d620e9394632eb21fc924ec420ff5e1b61d9be2736c2ae6875c0c5da1533\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c9a86ea3bac36f5fc030e2b25a469cfc2cb983ba4f6a9011d4127dd35df54ca7\"" Apr 24 23:39:17.260941 containerd[1463]: time="2026-04-24T23:39:17.260910929Z" level=info msg="StartContainer for \"c9a86ea3bac36f5fc030e2b25a469cfc2cb983ba4f6a9011d4127dd35df54ca7\"" Apr 24 23:39:17.291284 systemd[1]: Started cri-containerd-c9a86ea3bac36f5fc030e2b25a469cfc2cb983ba4f6a9011d4127dd35df54ca7.scope - libcontainer container c9a86ea3bac36f5fc030e2b25a469cfc2cb983ba4f6a9011d4127dd35df54ca7. Apr 24 23:39:17.332427 containerd[1463]: time="2026-04-24T23:39:17.332358994Z" level=info msg="StartContainer for \"c9a86ea3bac36f5fc030e2b25a469cfc2cb983ba4f6a9011d4127dd35df54ca7\" returns successfully" Apr 24 23:39:18.218699 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:40442.service - OpenSSH per-connection server daemon (10.0.0.1:40442). 
Apr 24 23:39:18.302189 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 40442 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:39:18.303974 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:18.310827 systemd-logind[1453]: New session 8 of user core. Apr 24 23:39:18.316277 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:39:18.495709 sshd[4366]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:18.499209 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:40442.service: Deactivated successfully. Apr 24 23:39:18.500686 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:39:18.501261 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:39:18.502142 systemd-logind[1453]: Removed session 8. Apr 24 23:39:22.811249 containerd[1463]: time="2026-04-24T23:39:22.809747833Z" level=info msg="StopPodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\"" Apr 24 23:39:22.865350 kubelet[2510]: I0424 23:39:22.865230 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58d6c8f9f9-4ngsq" podStartSLOduration=6.338429188 podStartE2EDuration="9.865212922s" podCreationTimestamp="2026-04-24 23:39:13 +0000 UTC" firstStartedPulling="2026-04-24 23:39:13.713186615 +0000 UTC m=+37.078263274" lastFinishedPulling="2026-04-24 23:39:17.239970349 +0000 UTC m=+40.605047008" observedRunningTime="2026-04-24 23:39:18.046238111 +0000 UTC m=+41.411314780" watchObservedRunningTime="2026-04-24 23:39:22.865212922 +0000 UTC m=+46.230289592" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.866 [INFO][4419] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.866 [INFO][4419] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" iface="eth0" netns="/var/run/netns/cni-be42914e-ea5b-d599-14b4-8581420dd8b0" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.866 [INFO][4419] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" iface="eth0" netns="/var/run/netns/cni-be42914e-ea5b-d599-14b4-8581420dd8b0" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.866 [INFO][4419] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" iface="eth0" netns="/var/run/netns/cni-be42914e-ea5b-d599-14b4-8581420dd8b0" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.866 [INFO][4419] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.866 [INFO][4419] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.887 [INFO][4427] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.887 [INFO][4427] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.887 [INFO][4427] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.897 [WARNING][4427] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.897 [INFO][4427] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.899 [INFO][4427] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:22.903382 containerd[1463]: 2026-04-24 23:39:22.901 [INFO][4419] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:22.905793 systemd[1]: run-netns-cni\x2dbe42914e\x2dea5b\x2dd599\x2d14b4\x2d8581420dd8b0.mount: Deactivated successfully. 
Apr 24 23:39:22.906488 containerd[1463]: time="2026-04-24T23:39:22.906320444Z" level=info msg="TearDown network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" successfully" Apr 24 23:39:22.906488 containerd[1463]: time="2026-04-24T23:39:22.906346896Z" level=info msg="StopPodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" returns successfully" Apr 24 23:39:22.913095 containerd[1463]: time="2026-04-24T23:39:22.912929140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-pnfsw,Uid:d2fbecdb-999b-4ff1-ac90-f81a5cfb1384,Namespace:calico-system,Attempt:1,}" Apr 24 23:39:23.091800 systemd-networkd[1389]: cali38018d04c8e: Link UP Apr 24 23:39:23.092532 systemd-networkd[1389]: cali38018d04c8e: Gained carrier Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.018 [INFO][4434] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0 calico-apiserver-88785c5b9- calico-system d2fbecdb-999b-4ff1-ac90-f81a5cfb1384 1012 0 2026-04-24 23:38:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88785c5b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-88785c5b9-pnfsw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali38018d04c8e [] [] }} ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.018 [INFO][4434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.042 [INFO][4447] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" HandleID="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.048 [INFO][4447] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" HandleID="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bdf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-88785c5b9-pnfsw", "timestamp":"2026-04-24 23:39:23.042622925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000419b80)} Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.048 [INFO][4447] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.048 [INFO][4447] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.048 [INFO][4447] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.050 [INFO][4447] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.054 [INFO][4447] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.057 [INFO][4447] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.059 [INFO][4447] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.061 [INFO][4447] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.061 [INFO][4447] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.062 [INFO][4447] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.070 [INFO][4447] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.084 [INFO][4447] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.085 [INFO][4447] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" host="localhost" Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.085 [INFO][4447] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:23.112752 containerd[1463]: 2026-04-24 23:39:23.085 [INFO][4447] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" HandleID="k8s-pod-network.dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.113265 containerd[1463]: 2026-04-24 23:39:23.087 [INFO][4434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-88785c5b9-pnfsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali38018d04c8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:23.113265 containerd[1463]: 2026-04-24 23:39:23.088 [INFO][4434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.113265 containerd[1463]: 2026-04-24 23:39:23.088 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38018d04c8e ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.113265 containerd[1463]: 2026-04-24 23:39:23.092 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.113265 containerd[1463]: 2026-04-24 23:39:23.092 [INFO][4434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc", Pod:"calico-apiserver-88785c5b9-pnfsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali38018d04c8e", MAC:"ae:be:91:1b:03:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:23.113265 containerd[1463]: 2026-04-24 23:39:23.110 [INFO][4434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc" 
Namespace="calico-system" Pod="calico-apiserver-88785c5b9-pnfsw" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:23.135238 containerd[1463]: time="2026-04-24T23:39:23.135072407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:23.135575 containerd[1463]: time="2026-04-24T23:39:23.135212450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:23.135575 containerd[1463]: time="2026-04-24T23:39:23.135239387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:23.135575 containerd[1463]: time="2026-04-24T23:39:23.135351628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:23.162322 systemd[1]: Started cri-containerd-dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc.scope - libcontainer container dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc. Apr 24 23:39:23.183095 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:23.212794 containerd[1463]: time="2026-04-24T23:39:23.212720560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-pnfsw,Uid:d2fbecdb-999b-4ff1-ac90-f81a5cfb1384,Namespace:calico-system,Attempt:1,} returns sandbox id \"dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc\"" Apr 24 23:39:23.216588 containerd[1463]: time="2026-04-24T23:39:23.216518106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:39:23.511498 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:41916.service - OpenSSH per-connection server daemon (10.0.0.1:41916). 
Apr 24 23:39:23.543507 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 41916 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:39:23.544504 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:23.547815 systemd-logind[1453]: New session 9 of user core. Apr 24 23:39:23.557269 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:39:23.664990 sshd[4523]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:23.667857 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:41916.service: Deactivated successfully. Apr 24 23:39:23.669200 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:39:23.669721 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:39:23.670432 systemd-logind[1453]: Removed session 9. Apr 24 23:39:23.807599 containerd[1463]: time="2026-04-24T23:39:23.807377108Z" level=info msg="StopPodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\"" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.856 [INFO][4549] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.857 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" iface="eth0" netns="/var/run/netns/cni-ae0510b5-8faa-f776-2084-06e859542a4f" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.857 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" iface="eth0" netns="/var/run/netns/cni-ae0510b5-8faa-f776-2084-06e859542a4f" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.857 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" iface="eth0" netns="/var/run/netns/cni-ae0510b5-8faa-f776-2084-06e859542a4f" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.857 [INFO][4549] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.857 [INFO][4549] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.876 [INFO][4567] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.876 [INFO][4567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.876 [INFO][4567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.887 [WARNING][4567] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.888 [INFO][4567] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.903 [INFO][4567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:23.907878 containerd[1463]: 2026-04-24 23:39:23.905 [INFO][4549] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:23.910610 containerd[1463]: time="2026-04-24T23:39:23.908229210Z" level=info msg="TearDown network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" successfully" Apr 24 23:39:23.910610 containerd[1463]: time="2026-04-24T23:39:23.908252052Z" level=info msg="StopPodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" returns successfully" Apr 24 23:39:23.912777 systemd[1]: run-netns-cni\x2dae0510b5\x2d8faa\x2df776\x2d2084\x2d06e859542a4f.mount: Deactivated successfully. 
Apr 24 23:39:23.916235 containerd[1463]: time="2026-04-24T23:39:23.916178507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7nrvs,Uid:552c601a-b830-4d6c-90dc-907cfec7edbf,Namespace:calico-system,Attempt:1,}" Apr 24 23:39:24.153735 systemd-networkd[1389]: cali15711d74d09: Link UP Apr 24 23:39:24.153874 systemd-networkd[1389]: cali15711d74d09: Gained carrier Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.048 [INFO][4576] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0 goldmane-cccfbd5cf- calico-system 552c601a-b830-4d6c-90dc-907cfec7edbf 1023 0 2026-04-24 23:38:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-7nrvs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali15711d74d09 [] [] }} ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.049 [INFO][4576] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.084 [INFO][4590] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" HandleID="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 
23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.093 [INFO][4590] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" HandleID="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000518870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-7nrvs", "timestamp":"2026-04-24 23:39:24.084016416 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c6c60)} Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.093 [INFO][4590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.093 [INFO][4590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.093 [INFO][4590] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.095 [INFO][4590] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.106 [INFO][4590] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.111 [INFO][4590] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.113 [INFO][4590] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.121 [INFO][4590] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.121 [INFO][4590] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.128 [INFO][4590] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1 Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.138 [INFO][4590] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.149 [INFO][4590] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.149 [INFO][4590] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" host="localhost" Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.149 [INFO][4590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:24.165363 containerd[1463]: 2026-04-24 23:39:24.149 [INFO][4590] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" HandleID="k8s-pod-network.31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:24.166547 containerd[1463]: 2026-04-24 23:39:24.151 [INFO][4576] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"552c601a-b830-4d6c-90dc-907cfec7edbf", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-7nrvs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15711d74d09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:24.166547 containerd[1463]: 2026-04-24 23:39:24.151 [INFO][4576] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:24.166547 containerd[1463]: 2026-04-24 23:39:24.151 [INFO][4576] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15711d74d09 ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:24.166547 containerd[1463]: 2026-04-24 23:39:24.153 [INFO][4576] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:24.166547 containerd[1463]: 2026-04-24 23:39:24.154 [INFO][4576] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"552c601a-b830-4d6c-90dc-907cfec7edbf", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1", Pod:"goldmane-cccfbd5cf-7nrvs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15711d74d09", MAC:"66:58:e3:c5:11:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:24.166547 containerd[1463]: 2026-04-24 23:39:24.162 [INFO][4576] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7nrvs" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:24.186367 containerd[1463]: time="2026-04-24T23:39:24.185854078Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:24.186367 containerd[1463]: time="2026-04-24T23:39:24.185951285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:24.186367 containerd[1463]: time="2026-04-24T23:39:24.185967583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:24.186367 containerd[1463]: time="2026-04-24T23:39:24.186204345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:24.219280 systemd[1]: Started cri-containerd-31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1.scope - libcontainer container 31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1. Apr 24 23:39:24.228208 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:24.252748 containerd[1463]: time="2026-04-24T23:39:24.252709934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7nrvs,Uid:552c601a-b830-4d6c-90dc-907cfec7edbf,Namespace:calico-system,Attempt:1,} returns sandbox id \"31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1\"" Apr 24 23:39:24.632416 systemd-networkd[1389]: cali38018d04c8e: Gained IPv6LL Apr 24 23:39:24.916343 systemd[1]: run-containerd-runc-k8s.io-31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1-runc.Rgp9Xp.mount: Deactivated successfully. 
Apr 24 23:39:25.273183 systemd-networkd[1389]: cali15711d74d09: Gained IPv6LL Apr 24 23:39:25.483251 containerd[1463]: time="2026-04-24T23:39:25.483163859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:25.484695 containerd[1463]: time="2026-04-24T23:39:25.484385300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:39:25.485417 containerd[1463]: time="2026-04-24T23:39:25.485378326Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:25.487618 containerd[1463]: time="2026-04-24T23:39:25.487574532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:25.488147 containerd[1463]: time="2026-04-24T23:39:25.488101277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.271548486s" Apr 24 23:39:25.488188 containerd[1463]: time="2026-04-24T23:39:25.488151261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:39:25.489727 containerd[1463]: time="2026-04-24T23:39:25.489705766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:39:25.495182 containerd[1463]: time="2026-04-24T23:39:25.495152986Z" level=info 
msg="CreateContainer within sandbox \"dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:39:25.509456 containerd[1463]: time="2026-04-24T23:39:25.509374354Z" level=info msg="CreateContainer within sandbox \"dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"834c7122bc00bc0d5e6f8b31f9052b603b6341124df1ec423d8cb194723c726f\"" Apr 24 23:39:25.510713 containerd[1463]: time="2026-04-24T23:39:25.510620739Z" level=info msg="StartContainer for \"834c7122bc00bc0d5e6f8b31f9052b603b6341124df1ec423d8cb194723c726f\"" Apr 24 23:39:25.545249 systemd[1]: Started cri-containerd-834c7122bc00bc0d5e6f8b31f9052b603b6341124df1ec423d8cb194723c726f.scope - libcontainer container 834c7122bc00bc0d5e6f8b31f9052b603b6341124df1ec423d8cb194723c726f. Apr 24 23:39:25.605082 containerd[1463]: time="2026-04-24T23:39:25.604877832Z" level=info msg="StartContainer for \"834c7122bc00bc0d5e6f8b31f9052b603b6341124df1ec423d8cb194723c726f\" returns successfully" Apr 24 23:39:25.809199 containerd[1463]: time="2026-04-24T23:39:25.807434672Z" level=info msg="StopPodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\"" Apr 24 23:39:25.809199 containerd[1463]: time="2026-04-24T23:39:25.807896789Z" level=info msg="StopPodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\"" Apr 24 23:39:25.809199 containerd[1463]: time="2026-04-24T23:39:25.807983418Z" level=info msg="StopPodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\"" Apr 24 23:39:25.809199 containerd[1463]: time="2026-04-24T23:39:25.809025139Z" level=info msg="StopPodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\"" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.879 [INFO][4758] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.879 [INFO][4758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" iface="eth0" netns="/var/run/netns/cni-f53812fc-d1c5-0fc3-77b0-733fbe4ad917" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.883 [INFO][4758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" iface="eth0" netns="/var/run/netns/cni-f53812fc-d1c5-0fc3-77b0-733fbe4ad917" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.885 [INFO][4758] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" iface="eth0" netns="/var/run/netns/cni-f53812fc-d1c5-0fc3-77b0-733fbe4ad917" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.885 [INFO][4758] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.885 [INFO][4758] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.926 [INFO][4789] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.926 [INFO][4789] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.926 [INFO][4789] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.936 [WARNING][4789] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.936 [INFO][4789] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.937 [INFO][4789] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:25.943055 containerd[1463]: 2026-04-24 23:39:25.940 [INFO][4758] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Apr 24 23:39:25.944720 systemd[1]: run-netns-cni\x2df53812fc\x2dd1c5\x2d0fc3\x2d77b0\x2d733fbe4ad917.mount: Deactivated successfully. 
Apr 24 23:39:25.945191 containerd[1463]: time="2026-04-24T23:39:25.945149434Z" level=info msg="TearDown network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" successfully" Apr 24 23:39:25.945191 containerd[1463]: time="2026-04-24T23:39:25.945177217Z" level=info msg="StopPodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" returns successfully" Apr 24 23:39:25.947985 containerd[1463]: time="2026-04-24T23:39:25.947963620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-j7rvq,Uid:930956ca-d26d-4366-9b81-420c40db0a9a,Namespace:calico-system,Attempt:1,}" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.885 [INFO][4756] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.885 [INFO][4756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" iface="eth0" netns="/var/run/netns/cni-5fa9aee6-68d0-e5b5-1693-54c78c94af9a" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.886 [INFO][4756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" iface="eth0" netns="/var/run/netns/cni-5fa9aee6-68d0-e5b5-1693-54c78c94af9a" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.887 [INFO][4756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" iface="eth0" netns="/var/run/netns/cni-5fa9aee6-68d0-e5b5-1693-54c78c94af9a" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.887 [INFO][4756] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.887 [INFO][4756] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.929 [INFO][4791] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.929 [INFO][4791] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.937 [INFO][4791] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.945 [WARNING][4791] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.945 [INFO][4791] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.947 [INFO][4791] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:25.950564 containerd[1463]: 2026-04-24 23:39:25.949 [INFO][4756] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:25.950869 containerd[1463]: time="2026-04-24T23:39:25.950716094Z" level=info msg="TearDown network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" successfully" Apr 24 23:39:25.950869 containerd[1463]: time="2026-04-24T23:39:25.950731651Z" level=info msg="StopPodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" returns successfully" Apr 24 23:39:25.955030 systemd[1]: run-netns-cni\x2d5fa9aee6\x2d68d0\x2de5b5\x2d1693\x2d54c78c94af9a.mount: Deactivated successfully. 
Apr 24 23:39:25.957372 kubelet[2510]: E0424 23:39:25.955734 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:25.958914 containerd[1463]: time="2026-04-24T23:39:25.958882632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4kc84,Uid:8ef9041b-388f-45eb-96a4-401487c8a29a,Namespace:kube-system,Attempt:1,}" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.898 [INFO][4771] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.899 [INFO][4771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" iface="eth0" netns="/var/run/netns/cni-844d5d94-3aa6-81fb-e455-02a94e0c9e90" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.899 [INFO][4771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" iface="eth0" netns="/var/run/netns/cni-844d5d94-3aa6-81fb-e455-02a94e0c9e90" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.900 [INFO][4771] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" iface="eth0" netns="/var/run/netns/cni-844d5d94-3aa6-81fb-e455-02a94e0c9e90" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.900 [INFO][4771] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.900 [INFO][4771] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.932 [INFO][4809] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.932 [INFO][4809] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.947 [INFO][4809] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.959 [WARNING][4809] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.959 [INFO][4809] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.961 [INFO][4809] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:25.967356 containerd[1463]: 2026-04-24 23:39:25.963 [INFO][4771] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:25.967650 containerd[1463]: time="2026-04-24T23:39:25.967545596Z" level=info msg="TearDown network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" successfully" Apr 24 23:39:25.967650 containerd[1463]: time="2026-04-24T23:39:25.967564529Z" level=info msg="StopPodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" returns successfully" Apr 24 23:39:25.972170 containerd[1463]: time="2026-04-24T23:39:25.971211624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2j2n,Uid:432f514d-2771-4770-8cbd-167f3881d2c4,Namespace:calico-system,Attempt:1,}" Apr 24 23:39:25.972874 systemd[1]: run-netns-cni\x2d844d5d94\x2d3aa6\x2d81fb\x2de455\x2d02a94e0c9e90.mount: Deactivated successfully. 
Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.886 [INFO][4757] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.887 [INFO][4757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" iface="eth0" netns="/var/run/netns/cni-54c1e126-70bd-d8d4-5765-4ea51159b58c" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.887 [INFO][4757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" iface="eth0" netns="/var/run/netns/cni-54c1e126-70bd-d8d4-5765-4ea51159b58c" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.888 [INFO][4757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" iface="eth0" netns="/var/run/netns/cni-54c1e126-70bd-d8d4-5765-4ea51159b58c" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.888 [INFO][4757] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.888 [INFO][4757] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.932 [INFO][4793] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.932 [INFO][4793] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.961 [INFO][4793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.969 [WARNING][4793] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.969 [INFO][4793] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.971 [INFO][4793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:25.984504 containerd[1463]: 2026-04-24 23:39:25.978 [INFO][4757] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:25.985854 containerd[1463]: time="2026-04-24T23:39:25.984585401Z" level=info msg="TearDown network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" successfully" Apr 24 23:39:25.985854 containerd[1463]: time="2026-04-24T23:39:25.984603763Z" level=info msg="StopPodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" returns successfully" Apr 24 23:39:25.989722 kubelet[2510]: E0424 23:39:25.989627 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:25.991832 containerd[1463]: time="2026-04-24T23:39:25.991740658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-htcpb,Uid:67600210-1e44-4182-a447-6bd334f7adf6,Namespace:kube-system,Attempt:1,}" Apr 24 23:39:26.084426 kubelet[2510]: I0424 23:39:26.082915 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-88785c5b9-pnfsw" podStartSLOduration=31.808835506 podStartE2EDuration="34.082901464s" podCreationTimestamp="2026-04-24 23:38:52 +0000 UTC" firstStartedPulling="2026-04-24 23:39:23.215432674 +0000 UTC m=+46.580509333" lastFinishedPulling="2026-04-24 23:39:25.489498632 +0000 UTC m=+48.854575291" observedRunningTime="2026-04-24 23:39:26.079823551 +0000 UTC m=+49.444900221" watchObservedRunningTime="2026-04-24 23:39:26.082901464 +0000 UTC m=+49.447978130" Apr 24 23:39:26.169503 systemd-networkd[1389]: caliaeab7766366: Link UP Apr 24 23:39:26.170307 systemd-networkd[1389]: caliaeab7766366: Gained carrier Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.053 [INFO][4832] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--4kc84-eth0 coredns-66bc5c9577- 
kube-system 8ef9041b-388f-45eb-96a4-401487c8a29a 1048 0 2026-04-24 23:38:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-4kc84 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaeab7766366 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.053 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.114 [INFO][4875] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" HandleID="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.121 [INFO][4875] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" HandleID="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-4kc84", "timestamp":"2026-04-24 23:39:26.114639945 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d0f20)} Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.121 [INFO][4875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.121 [INFO][4875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.121 [INFO][4875] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.125 [INFO][4875] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.135 [INFO][4875] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.147 [INFO][4875] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.152 [INFO][4875] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.154 [INFO][4875] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.154 [INFO][4875] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.155 [INFO][4875] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4 Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.159 [INFO][4875] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.164 [INFO][4875] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.164 [INFO][4875] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" host="localhost" Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.164 [INFO][4875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:39:26.184423 containerd[1463]: 2026-04-24 23:39:26.164 [INFO][4875] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" HandleID="k8s-pod-network.7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.184877 containerd[1463]: 2026-04-24 23:39:26.166 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4kc84-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ef9041b-388f-45eb-96a4-401487c8a29a", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-4kc84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeab7766366", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.184877 containerd[1463]: 2026-04-24 23:39:26.166 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.184877 containerd[1463]: 2026-04-24 23:39:26.166 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeab7766366 ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.184877 containerd[1463]: 2026-04-24 23:39:26.170 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.184877 containerd[1463]: 2026-04-24 23:39:26.171 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4kc84-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ef9041b-388f-45eb-96a4-401487c8a29a", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4", Pod:"coredns-66bc5c9577-4kc84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeab7766366", MAC:"6e:3d:8e:31:0f:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.184877 containerd[1463]: 2026-04-24 23:39:26.179 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4" Namespace="kube-system" Pod="coredns-66bc5c9577-4kc84" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:26.208376 containerd[1463]: time="2026-04-24T23:39:26.208286187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:26.208503 containerd[1463]: time="2026-04-24T23:39:26.208399048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:26.208503 containerd[1463]: time="2026-04-24T23:39:26.208416881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.208683 containerd[1463]: time="2026-04-24T23:39:26.208588901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.236297 systemd[1]: Started cri-containerd-7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4.scope - libcontainer container 7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4. 
Apr 24 23:39:26.256276 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:26.320177 containerd[1463]: time="2026-04-24T23:39:26.319042560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4kc84,Uid:8ef9041b-388f-45eb-96a4-401487c8a29a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4\"" Apr 24 23:39:26.321961 kubelet[2510]: E0424 23:39:26.321898 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:26.341712 containerd[1463]: time="2026-04-24T23:39:26.341189259Z" level=info msg="CreateContainer within sandbox \"7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:39:26.373749 systemd-networkd[1389]: calie554e77e27f: Link UP Apr 24 23:39:26.376239 systemd-networkd[1389]: calie554e77e27f: Gained carrier Apr 24 23:39:26.394650 containerd[1463]: time="2026-04-24T23:39:26.394528916Z" level=info msg="CreateContainer within sandbox \"7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"358dbac01fa4cd220307b282839e37bfe6de70c4f11f8a608b8636e2c6275cee\"" Apr 24 23:39:26.398224 containerd[1463]: time="2026-04-24T23:39:26.396780056Z" level=info msg="StartContainer for \"358dbac01fa4cd220307b282839e37bfe6de70c4f11f8a608b8636e2c6275cee\"" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.053 [INFO][4844] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b2j2n-eth0 csi-node-driver- calico-system 432f514d-2771-4770-8cbd-167f3881d2c4 1049 0 2026-04-24 23:38:53 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b2j2n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie554e77e27f [] [] }} ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.054 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.110 [INFO][4881] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" HandleID="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.122 [INFO][4881] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" HandleID="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000301340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b2j2n", "timestamp":"2026-04-24 23:39:26.110411764 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000420f20)} Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.122 [INFO][4881] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.164 [INFO][4881] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.164 [INFO][4881] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.231 [INFO][4881] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.240 [INFO][4881] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.260 [INFO][4881] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.282 [INFO][4881] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.302 [INFO][4881] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.302 [INFO][4881] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.324 [INFO][4881] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac Apr 24 
23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.351 [INFO][4881] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.363 [INFO][4881] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.363 [INFO][4881] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" host="localhost" Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.363 [INFO][4881] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:26.463754 containerd[1463]: 2026-04-24 23:39:26.363 [INFO][4881] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" HandleID="k8s-pod-network.c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.465774 containerd[1463]: 2026-04-24 23:39:26.366 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2j2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"432f514d-2771-4770-8cbd-167f3881d2c4", ResourceVersion:"1049", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b2j2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie554e77e27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.465774 containerd[1463]: 2026-04-24 23:39:26.366 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.465774 containerd[1463]: 2026-04-24 23:39:26.366 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie554e77e27f ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.465774 containerd[1463]: 2026-04-24 23:39:26.381 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.465774 containerd[1463]: 2026-04-24 23:39:26.382 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2j2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"432f514d-2771-4770-8cbd-167f3881d2c4", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac", Pod:"csi-node-driver-b2j2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie554e77e27f", 
MAC:"c6:a6:f8:0d:03:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.465774 containerd[1463]: 2026-04-24 23:39:26.458 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac" Namespace="calico-system" Pod="csi-node-driver-b2j2n" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:26.483214 systemd[1]: Started cri-containerd-358dbac01fa4cd220307b282839e37bfe6de70c4f11f8a608b8636e2c6275cee.scope - libcontainer container 358dbac01fa4cd220307b282839e37bfe6de70c4f11f8a608b8636e2c6275cee. Apr 24 23:39:26.496984 containerd[1463]: time="2026-04-24T23:39:26.496772413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:26.496984 containerd[1463]: time="2026-04-24T23:39:26.496814268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:26.496984 containerd[1463]: time="2026-04-24T23:39:26.496822006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.496984 containerd[1463]: time="2026-04-24T23:39:26.496879321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.504366 systemd-networkd[1389]: cali3d3275631d8: Link UP Apr 24 23:39:26.506530 systemd-networkd[1389]: cali3d3275631d8: Gained carrier Apr 24 23:39:26.526268 systemd[1]: Started cri-containerd-c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac.scope - libcontainer container c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac. 
Apr 24 23:39:26.530021 containerd[1463]: time="2026-04-24T23:39:26.529983528Z" level=info msg="StartContainer for \"358dbac01fa4cd220307b282839e37bfe6de70c4f11f8a608b8636e2c6275cee\" returns successfully" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.054 [INFO][4821] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0 calico-apiserver-88785c5b9- calico-system 930956ca-d26d-4366-9b81-420c40db0a9a 1046 0 2026-04-24 23:38:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88785c5b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-88785c5b9-j7rvq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3d3275631d8 [] [] }} ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.055 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.121 [INFO][4874] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" HandleID="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.129 
[INFO][4874] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" HandleID="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002764b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-88785c5b9-j7rvq", "timestamp":"2026-04-24 23:39:26.121383246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040e6e0)} Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.130 [INFO][4874] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.363 [INFO][4874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.363 [INFO][4874] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.372 [INFO][4874] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.458 [INFO][4874] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.466 [INFO][4874] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.468 [INFO][4874] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.480 [INFO][4874] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.480 [INFO][4874] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.482 [INFO][4874] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82 Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.489 [INFO][4874] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.496 [INFO][4874] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.497 [INFO][4874] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" host="localhost" Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.497 [INFO][4874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:26.531140 containerd[1463]: 2026-04-24 23:39:26.497 [INFO][4874] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" HandleID="k8s-pod-network.1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.531547 containerd[1463]: 2026-04-24 23:39:26.500 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"930956ca-d26d-4366-9b81-420c40db0a9a", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-88785c5b9-j7rvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d3275631d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.531547 containerd[1463]: 2026-04-24 23:39:26.500 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.531547 containerd[1463]: 2026-04-24 23:39:26.500 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d3275631d8 ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.531547 containerd[1463]: 2026-04-24 23:39:26.507 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.531547 containerd[1463]: 2026-04-24 23:39:26.507 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"930956ca-d26d-4366-9b81-420c40db0a9a", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82", Pod:"calico-apiserver-88785c5b9-j7rvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d3275631d8", MAC:"66:3b:7c:81:e0:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.531547 containerd[1463]: 2026-04-24 23:39:26.525 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82" 
Namespace="calico-system" Pod="calico-apiserver-88785c5b9-j7rvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0" Apr 24 23:39:26.556156 containerd[1463]: time="2026-04-24T23:39:26.553103021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:26.556156 containerd[1463]: time="2026-04-24T23:39:26.555509030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:26.556156 containerd[1463]: time="2026-04-24T23:39:26.555535037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.556156 containerd[1463]: time="2026-04-24T23:39:26.555635918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.574215 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:26.586096 systemd[1]: Started cri-containerd-1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82.scope - libcontainer container 1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82. 
Apr 24 23:39:26.597197 containerd[1463]: time="2026-04-24T23:39:26.596288158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2j2n,Uid:432f514d-2771-4770-8cbd-167f3881d2c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac\"" Apr 24 23:39:26.625764 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:26.629508 systemd-networkd[1389]: calief58c1de418: Link UP Apr 24 23:39:26.629790 systemd-networkd[1389]: calief58c1de418: Gained carrier Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.074 [INFO][4854] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--htcpb-eth0 coredns-66bc5c9577- kube-system 67600210-1e44-4182-a447-6bd334f7adf6 1047 0 2026-04-24 23:38:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-htcpb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief58c1de418 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.074 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.146 [INFO][4896] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" HandleID="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.153 [INFO][4896] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" HandleID="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138a80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-htcpb", "timestamp":"2026-04-24 23:39:26.146528826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005211e0)} Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.153 [INFO][4896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.497 [INFO][4896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.497 [INFO][4896] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.505 [INFO][4896] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.566 [INFO][4896] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.584 [INFO][4896] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.587 [INFO][4896] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.591 [INFO][4896] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.591 [INFO][4896] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.594 [INFO][4896] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814 Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.607 [INFO][4896] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.613 [INFO][4896] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.613 [INFO][4896] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" host="localhost" Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.613 [INFO][4896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:26.658351 containerd[1463]: 2026-04-24 23:39:26.613 [INFO][4896] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" HandleID="k8s-pod-network.8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:26.659469 containerd[1463]: 2026-04-24 23:39:26.617 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--htcpb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67600210-1e44-4182-a447-6bd334f7adf6", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-htcpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief58c1de418", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.659469 containerd[1463]: 2026-04-24 23:39:26.617 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:26.659469 containerd[1463]: 2026-04-24 23:39:26.617 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief58c1de418 ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 
23:39:26.659469 containerd[1463]: 2026-04-24 23:39:26.630 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:26.659469 containerd[1463]: 2026-04-24 23:39:26.631 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--htcpb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67600210-1e44-4182-a447-6bd334f7adf6", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814", Pod:"coredns-66bc5c9577-htcpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief58c1de418", 
MAC:"c6:fb:e0:bf:96:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:26.659469 containerd[1463]: 2026-04-24 23:39:26.653 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814" Namespace="kube-system" Pod="coredns-66bc5c9577-htcpb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:26.673569 containerd[1463]: time="2026-04-24T23:39:26.673528049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88785c5b9-j7rvq,Uid:930956ca-d26d-4366-9b81-420c40db0a9a,Namespace:calico-system,Attempt:1,} returns sandbox id \"1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82\"" Apr 24 23:39:26.679933 containerd[1463]: time="2026-04-24T23:39:26.679698736Z" level=info msg="CreateContainer within sandbox \"1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:39:26.707193 containerd[1463]: time="2026-04-24T23:39:26.704139898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:26.711928 containerd[1463]: time="2026-04-24T23:39:26.709174405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:26.711928 containerd[1463]: time="2026-04-24T23:39:26.709204660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.711928 containerd[1463]: time="2026-04-24T23:39:26.709363441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:26.732964 containerd[1463]: time="2026-04-24T23:39:26.732856719Z" level=info msg="CreateContainer within sandbox \"1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03aa7bda3c5a6f73a11246687005bf7ddb9a273174f1b681b71e3bf0c2085ad3\"" Apr 24 23:39:26.735077 containerd[1463]: time="2026-04-24T23:39:26.734038644Z" level=info msg="StartContainer for \"03aa7bda3c5a6f73a11246687005bf7ddb9a273174f1b681b71e3bf0c2085ad3\"" Apr 24 23:39:26.742852 systemd[1]: Started cri-containerd-8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814.scope - libcontainer container 8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814. Apr 24 23:39:26.760883 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:26.774270 systemd[1]: Started cri-containerd-03aa7bda3c5a6f73a11246687005bf7ddb9a273174f1b681b71e3bf0c2085ad3.scope - libcontainer container 03aa7bda3c5a6f73a11246687005bf7ddb9a273174f1b681b71e3bf0c2085ad3. 
Apr 24 23:39:26.813985 containerd[1463]: time="2026-04-24T23:39:26.813580726Z" level=info msg="StopPodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\"" Apr 24 23:39:26.814413 containerd[1463]: time="2026-04-24T23:39:26.814376546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-htcpb,Uid:67600210-1e44-4182-a447-6bd334f7adf6,Namespace:kube-system,Attempt:1,} returns sandbox id \"8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814\"" Apr 24 23:39:26.816075 kubelet[2510]: E0424 23:39:26.815729 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:26.828857 containerd[1463]: time="2026-04-24T23:39:26.828592583Z" level=info msg="CreateContainer within sandbox \"8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:39:26.848678 containerd[1463]: time="2026-04-24T23:39:26.848507811Z" level=info msg="CreateContainer within sandbox \"8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9868073b73be0839d5fa632aca9aa10f275d3f694566d9e48dbd4589135f91cb\"" Apr 24 23:39:26.850731 containerd[1463]: time="2026-04-24T23:39:26.849985069Z" level=info msg="StartContainer for \"9868073b73be0839d5fa632aca9aa10f275d3f694566d9e48dbd4589135f91cb\"" Apr 24 23:39:26.883721 containerd[1463]: time="2026-04-24T23:39:26.883606280Z" level=info msg="StartContainer for \"03aa7bda3c5a6f73a11246687005bf7ddb9a273174f1b681b71e3bf0c2085ad3\" returns successfully" Apr 24 23:39:26.899580 systemd[1]: Started cri-containerd-9868073b73be0839d5fa632aca9aa10f275d3f694566d9e48dbd4589135f91cb.scope - libcontainer container 9868073b73be0839d5fa632aca9aa10f275d3f694566d9e48dbd4589135f91cb. 
Apr 24 23:39:26.959967 containerd[1463]: time="2026-04-24T23:39:26.959483200Z" level=info msg="StartContainer for \"9868073b73be0839d5fa632aca9aa10f275d3f694566d9e48dbd4589135f91cb\" returns successfully" Apr 24 23:39:26.965858 systemd[1]: run-netns-cni\x2d54c1e126\x2d70bd\x2dd8d4\x2d5765\x2d4ea51159b58c.mount: Deactivated successfully. Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.908 [INFO][5221] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.909 [INFO][5221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" iface="eth0" netns="/var/run/netns/cni-e6707127-f10e-794f-1305-7c3fbd84cab9" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.909 [INFO][5221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" iface="eth0" netns="/var/run/netns/cni-e6707127-f10e-794f-1305-7c3fbd84cab9" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.910 [INFO][5221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" iface="eth0" netns="/var/run/netns/cni-e6707127-f10e-794f-1305-7c3fbd84cab9" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.910 [INFO][5221] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.910 [INFO][5221] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.970 [INFO][5266] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.973 [INFO][5266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.973 [INFO][5266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.982 [WARNING][5266] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:26.982 [INFO][5266] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:27.011 [INFO][5266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:27.034685 containerd[1463]: 2026-04-24 23:39:27.022 [INFO][5221] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:27.034685 containerd[1463]: time="2026-04-24T23:39:27.033858110Z" level=info msg="TearDown network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" successfully" Apr 24 23:39:27.034685 containerd[1463]: time="2026-04-24T23:39:27.033900115Z" level=info msg="StopPodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" returns successfully" Apr 24 23:39:27.039526 systemd[1]: run-netns-cni\x2de6707127\x2df10e\x2d794f\x2d1305\x2d7c3fbd84cab9.mount: Deactivated successfully. 
Apr 24 23:39:27.056043 containerd[1463]: time="2026-04-24T23:39:27.055986982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7874b6c748-zjlx4,Uid:90155a0c-e2a3-4e4e-bd3b-19b850d41c9a,Namespace:calico-system,Attempt:1,}" Apr 24 23:39:27.098572 kubelet[2510]: E0424 23:39:27.097256 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:27.115056 kubelet[2510]: E0424 23:39:27.113426 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:27.115265 kubelet[2510]: I0424 23:39:27.115168 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-88785c5b9-j7rvq" podStartSLOduration=35.115150936 podStartE2EDuration="35.115150936s" podCreationTimestamp="2026-04-24 23:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:39:27.099375209 +0000 UTC m=+50.464451879" watchObservedRunningTime="2026-04-24 23:39:27.115150936 +0000 UTC m=+50.480227602" Apr 24 23:39:27.141299 kubelet[2510]: I0424 23:39:27.140922 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-htcpb" podStartSLOduration=45.138104271 podStartE2EDuration="45.138104271s" podCreationTimestamp="2026-04-24 23:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:39:27.13762875 +0000 UTC m=+50.502705420" watchObservedRunningTime="2026-04-24 23:39:27.138104271 +0000 UTC m=+50.503180940" Apr 24 23:39:27.209972 kubelet[2510]: I0424 23:39:27.209879 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-66bc5c9577-4kc84" podStartSLOduration=45.20985879 podStartE2EDuration="45.20985879s" podCreationTimestamp="2026-04-24 23:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:39:27.160219994 +0000 UTC m=+50.525296664" watchObservedRunningTime="2026-04-24 23:39:27.20985879 +0000 UTC m=+50.574935461" Apr 24 23:39:27.383776 systemd-networkd[1389]: cali0e344f038d3: Link UP Apr 24 23:39:27.383976 systemd-networkd[1389]: cali0e344f038d3: Gained carrier Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.221 [INFO][5305] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0 calico-kube-controllers-7874b6c748- calico-system 90155a0c-e2a3-4e4e-bd3b-19b850d41c9a 1082 0 2026-04-24 23:38:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7874b6c748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7874b6c748-zjlx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0e344f038d3 [] [] }} ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.221 [INFO][5305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 
23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.278 [INFO][5321] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" HandleID="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.291 [INFO][5321] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" HandleID="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7874b6c748-zjlx4", "timestamp":"2026-04-24 23:39:27.278921207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040af20)} Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.291 [INFO][5321] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.291 [INFO][5321] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.291 [INFO][5321] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.301 [INFO][5321] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.307 [INFO][5321] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.314 [INFO][5321] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.327 [INFO][5321] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.332 [INFO][5321] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.332 [INFO][5321] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.348 [INFO][5321] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.355 [INFO][5321] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.368 [INFO][5321] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.368 [INFO][5321] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" host="localhost" Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.368 [INFO][5321] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:27.409413 containerd[1463]: 2026-04-24 23:39:27.368 [INFO][5321] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" HandleID="k8s-pod-network.c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.410248 containerd[1463]: 2026-04-24 23:39:27.378 [INFO][5305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0", GenerateName:"calico-kube-controllers-7874b6c748-", Namespace:"calico-system", SelfLink:"", UID:"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7874b6c748", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7874b6c748-zjlx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e344f038d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:27.410248 containerd[1463]: 2026-04-24 23:39:27.379 [INFO][5305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.410248 containerd[1463]: 2026-04-24 23:39:27.379 [INFO][5305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e344f038d3 ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.410248 containerd[1463]: 2026-04-24 23:39:27.384 [INFO][5305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.410248 containerd[1463]: 
2026-04-24 23:39:27.384 [INFO][5305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0", GenerateName:"calico-kube-controllers-7874b6c748-", Namespace:"calico-system", SelfLink:"", UID:"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7874b6c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad", Pod:"calico-kube-controllers-7874b6c748-zjlx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e344f038d3", MAC:"e6:6a:90:24:2f:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:27.410248 containerd[1463]: 
2026-04-24 23:39:27.406 [INFO][5305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad" Namespace="calico-system" Pod="calico-kube-controllers-7874b6c748-zjlx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:27.439006 containerd[1463]: time="2026-04-24T23:39:27.438901849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:39:27.439280 containerd[1463]: time="2026-04-24T23:39:27.438960905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:39:27.439280 containerd[1463]: time="2026-04-24T23:39:27.438977290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:27.439280 containerd[1463]: time="2026-04-24T23:39:27.439046422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:39:27.480323 systemd[1]: Started cri-containerd-c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad.scope - libcontainer container c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad. 
Apr 24 23:39:27.509977 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:39:27.512365 systemd-networkd[1389]: calie554e77e27f: Gained IPv6LL Apr 24 23:39:27.571842 containerd[1463]: time="2026-04-24T23:39:27.571791888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7874b6c748-zjlx4,Uid:90155a0c-e2a3-4e4e-bd3b-19b850d41c9a,Namespace:calico-system,Attempt:1,} returns sandbox id \"c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad\"" Apr 24 23:39:27.832612 systemd-networkd[1389]: caliaeab7766366: Gained IPv6LL Apr 24 23:39:27.964755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355575349.mount: Deactivated successfully. Apr 24 23:39:28.118360 kubelet[2510]: E0424 23:39:28.117996 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:28.118789 kubelet[2510]: E0424 23:39:28.118778 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:28.197382 containerd[1463]: time="2026-04-24T23:39:28.197331888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:28.199415 containerd[1463]: time="2026-04-24T23:39:28.199282637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:39:28.202289 containerd[1463]: time="2026-04-24T23:39:28.201834322Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:28.206882 containerd[1463]: time="2026-04-24T23:39:28.206842325Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:28.207821 containerd[1463]: time="2026-04-24T23:39:28.207778711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.717992972s" Apr 24 23:39:28.207880 containerd[1463]: time="2026-04-24T23:39:28.207822438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:39:28.210942 containerd[1463]: time="2026-04-24T23:39:28.210917960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:39:28.216171 containerd[1463]: time="2026-04-24T23:39:28.216079997Z" level=info msg="CreateContainer within sandbox \"31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:39:28.235536 containerd[1463]: time="2026-04-24T23:39:28.235474925Z" level=info msg="CreateContainer within sandbox \"31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3883c134d587ef59019dec84db0de805698b521f7d3fdf21dddb19c24c2fb54c\"" Apr 24 23:39:28.236796 containerd[1463]: time="2026-04-24T23:39:28.236775028Z" level=info msg="StartContainer for \"3883c134d587ef59019dec84db0de805698b521f7d3fdf21dddb19c24c2fb54c\"" Apr 24 23:39:28.292473 systemd[1]: Started cri-containerd-3883c134d587ef59019dec84db0de805698b521f7d3fdf21dddb19c24c2fb54c.scope - libcontainer container 
3883c134d587ef59019dec84db0de805698b521f7d3fdf21dddb19c24c2fb54c. Apr 24 23:39:28.340429 containerd[1463]: time="2026-04-24T23:39:28.340391488Z" level=info msg="StartContainer for \"3883c134d587ef59019dec84db0de805698b521f7d3fdf21dddb19c24c2fb54c\" returns successfully" Apr 24 23:39:28.472393 systemd-networkd[1389]: cali3d3275631d8: Gained IPv6LL Apr 24 23:39:28.600453 systemd-networkd[1389]: calief58c1de418: Gained IPv6LL Apr 24 23:39:28.705546 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:41922.service - OpenSSH per-connection server daemon (10.0.0.1:41922). Apr 24 23:39:28.762277 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 41922 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:39:28.764351 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:28.770770 systemd-logind[1453]: New session 10 of user core. Apr 24 23:39:28.776306 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:39:29.020954 sshd[5455]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:29.024362 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:41922.service: Deactivated successfully. Apr 24 23:39:29.025841 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:39:29.026400 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:39:29.027193 systemd-logind[1453]: Removed session 10. 
Apr 24 23:39:29.048930 systemd-networkd[1389]: cali0e344f038d3: Gained IPv6LL Apr 24 23:39:29.126042 kubelet[2510]: E0424 23:39:29.125851 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:29.127702 kubelet[2510]: E0424 23:39:29.125705 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:39:31.125735 containerd[1463]: time="2026-04-24T23:39:31.125583979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:31.127662 containerd[1463]: time="2026-04-24T23:39:31.126289526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:39:31.127736 containerd[1463]: time="2026-04-24T23:39:31.127674509Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:31.130090 containerd[1463]: time="2026-04-24T23:39:31.130055173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:31.130752 containerd[1463]: time="2026-04-24T23:39:31.130709524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.919716773s" Apr 24 23:39:31.130752 containerd[1463]: 
time="2026-04-24T23:39:31.130739436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:39:31.131713 containerd[1463]: time="2026-04-24T23:39:31.131671734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:39:31.135229 containerd[1463]: time="2026-04-24T23:39:31.135204410Z" level=info msg="CreateContainer within sandbox \"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:39:31.173637 containerd[1463]: time="2026-04-24T23:39:31.173508323Z" level=info msg="CreateContainer within sandbox \"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4c6a057f1f039d5ad76dc9957a9d4dc7ed412945f67b23c6f61373d2ee88879a\"" Apr 24 23:39:31.175028 containerd[1463]: time="2026-04-24T23:39:31.174914240Z" level=info msg="StartContainer for \"4c6a057f1f039d5ad76dc9957a9d4dc7ed412945f67b23c6f61373d2ee88879a\"" Apr 24 23:39:31.224351 systemd[1]: Started cri-containerd-4c6a057f1f039d5ad76dc9957a9d4dc7ed412945f67b23c6f61373d2ee88879a.scope - libcontainer container 4c6a057f1f039d5ad76dc9957a9d4dc7ed412945f67b23c6f61373d2ee88879a. 
Apr 24 23:39:31.249746 containerd[1463]: time="2026-04-24T23:39:31.249676066Z" level=info msg="StartContainer for \"4c6a057f1f039d5ad76dc9957a9d4dc7ed412945f67b23c6f61373d2ee88879a\" returns successfully" Apr 24 23:39:33.147387 containerd[1463]: time="2026-04-24T23:39:33.147313675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:33.148994 containerd[1463]: time="2026-04-24T23:39:33.148584274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:39:33.150512 containerd[1463]: time="2026-04-24T23:39:33.150475289Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:33.153528 containerd[1463]: time="2026-04-24T23:39:33.153477240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:33.154021 containerd[1463]: time="2026-04-24T23:39:33.153987998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.022269613s" Apr 24 23:39:33.154021 containerd[1463]: time="2026-04-24T23:39:33.154020181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 23:39:33.155589 containerd[1463]: time="2026-04-24T23:39:33.155506171Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:39:33.177722 containerd[1463]: time="2026-04-24T23:39:33.177666687Z" level=info msg="CreateContainer within sandbox \"c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:39:33.197608 containerd[1463]: time="2026-04-24T23:39:33.197549463Z" level=info msg="CreateContainer within sandbox \"c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6bde857b9097b213692637da977c29fb3daa9a1d212815cd5a061894a8c6a84e\"" Apr 24 23:39:33.198065 containerd[1463]: time="2026-04-24T23:39:33.198044111Z" level=info msg="StartContainer for \"6bde857b9097b213692637da977c29fb3daa9a1d212815cd5a061894a8c6a84e\"" Apr 24 23:39:33.247287 systemd[1]: Started cri-containerd-6bde857b9097b213692637da977c29fb3daa9a1d212815cd5a061894a8c6a84e.scope - libcontainer container 6bde857b9097b213692637da977c29fb3daa9a1d212815cd5a061894a8c6a84e. Apr 24 23:39:33.292012 containerd[1463]: time="2026-04-24T23:39:33.291898399Z" level=info msg="StartContainer for \"6bde857b9097b213692637da977c29fb3daa9a1d212815cd5a061894a8c6a84e\" returns successfully" Apr 24 23:39:34.033769 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:36286.service - OpenSSH per-connection server daemon (10.0.0.1:36286). Apr 24 23:39:34.081900 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 36286 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE Apr 24 23:39:34.084085 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:34.088951 systemd-logind[1453]: New session 11 of user core. Apr 24 23:39:34.095296 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 24 23:39:34.185826 kubelet[2510]: I0424 23:39:34.185603 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-7nrvs" podStartSLOduration=37.22871543 podStartE2EDuration="41.185586825s" podCreationTimestamp="2026-04-24 23:38:53 +0000 UTC" firstStartedPulling="2026-04-24 23:39:24.253895292 +0000 UTC m=+47.618971951" lastFinishedPulling="2026-04-24 23:39:28.210766683 +0000 UTC m=+51.575843346" observedRunningTime="2026-04-24 23:39:29.154486013 +0000 UTC m=+52.519562679" watchObservedRunningTime="2026-04-24 23:39:34.185586825 +0000 UTC m=+57.550663492" Apr 24 23:39:34.186874 kubelet[2510]: I0424 23:39:34.186677 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7874b6c748-zjlx4" podStartSLOduration=35.604450483 podStartE2EDuration="41.186619358s" podCreationTimestamp="2026-04-24 23:38:53 +0000 UTC" firstStartedPulling="2026-04-24 23:39:27.573181751 +0000 UTC m=+50.938258410" lastFinishedPulling="2026-04-24 23:39:33.155350614 +0000 UTC m=+56.520427285" observedRunningTime="2026-04-24 23:39:34.185318678 +0000 UTC m=+57.550395344" watchObservedRunningTime="2026-04-24 23:39:34.186619358 +0000 UTC m=+57.551696028" Apr 24 23:39:34.360749 sshd[5639]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:34.367954 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:36286.service: Deactivated successfully. Apr 24 23:39:34.369872 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:39:34.370633 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:39:34.371636 systemd-logind[1453]: Removed session 11. 
Apr 24 23:39:34.670613 containerd[1463]: time="2026-04-24T23:39:34.670470759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:34.672258 containerd[1463]: time="2026-04-24T23:39:34.672063717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:39:34.673481 containerd[1463]: time="2026-04-24T23:39:34.673444892Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:34.676323 containerd[1463]: time="2026-04-24T23:39:34.676274059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:34.676803 containerd[1463]: time="2026-04-24T23:39:34.676756348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.521212738s" Apr 24 23:39:34.676803 containerd[1463]: time="2026-04-24T23:39:34.676790474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:39:34.682987 containerd[1463]: time="2026-04-24T23:39:34.682887575Z" level=info msg="CreateContainer within sandbox \"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:39:34.701714 containerd[1463]: time="2026-04-24T23:39:34.701635895Z" level=info msg="CreateContainer within sandbox \"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d112cf3611fdfc52f3706adb4e6067d4bc72959f9064cd57ab4fedc665f20be5\"" Apr 24 23:39:34.702482 containerd[1463]: time="2026-04-24T23:39:34.702432880Z" level=info msg="StartContainer for \"d112cf3611fdfc52f3706adb4e6067d4bc72959f9064cd57ab4fedc665f20be5\"" Apr 24 23:39:34.733558 systemd[1]: Started cri-containerd-d112cf3611fdfc52f3706adb4e6067d4bc72959f9064cd57ab4fedc665f20be5.scope - libcontainer container d112cf3611fdfc52f3706adb4e6067d4bc72959f9064cd57ab4fedc665f20be5. Apr 24 23:39:34.799503 containerd[1463]: time="2026-04-24T23:39:34.799014974Z" level=info msg="StartContainer for \"d112cf3611fdfc52f3706adb4e6067d4bc72959f9064cd57ab4fedc665f20be5\" returns successfully" Apr 24 23:39:34.968633 kubelet[2510]: I0424 23:39:34.968486 2510 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:39:34.969845 kubelet[2510]: I0424 23:39:34.969813 2510 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:39:35.200138 kubelet[2510]: I0424 23:39:35.199960 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b2j2n" podStartSLOduration=34.121935189 podStartE2EDuration="42.19994605s" podCreationTimestamp="2026-04-24 23:38:53 +0000 UTC" firstStartedPulling="2026-04-24 23:39:26.599893541 +0000 UTC m=+49.964970200" lastFinishedPulling="2026-04-24 23:39:34.677904402 +0000 UTC m=+58.042981061" observedRunningTime="2026-04-24 23:39:35.197178453 +0000 UTC 
m=+58.562255124" watchObservedRunningTime="2026-04-24 23:39:35.19994605 +0000 UTC m=+58.565022720" Apr 24 23:39:36.794380 containerd[1463]: time="2026-04-24T23:39:36.794200296Z" level=info msg="StopPodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\"" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.856 [WARNING][5731] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4kc84-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ef9041b-388f-45eb-96a4-401487c8a29a", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4", Pod:"coredns-66bc5c9577-4kc84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeab7766366", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.856 [INFO][5731] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.856 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" iface="eth0" netns="" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.856 [INFO][5731] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.856 [INFO][5731] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.918 [INFO][5741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.919 [INFO][5741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.919 [INFO][5741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.934 [WARNING][5741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.934 [INFO][5741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.937 [INFO][5741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:36.943029 containerd[1463]: 2026-04-24 23:39:36.939 [INFO][5731] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:36.944172 containerd[1463]: time="2026-04-24T23:39:36.943095502Z" level=info msg="TearDown network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" successfully" Apr 24 23:39:36.944172 containerd[1463]: time="2026-04-24T23:39:36.943216415Z" level=info msg="StopPodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" returns successfully" Apr 24 23:39:37.009583 containerd[1463]: time="2026-04-24T23:39:37.009329138Z" level=info msg="RemovePodSandbox for \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\"" Apr 24 23:39:37.013320 containerd[1463]: time="2026-04-24T23:39:37.013060565Z" level=info msg="Forcibly stopping sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\"" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.091 [WARNING][5758] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4kc84-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ef9041b-388f-45eb-96a4-401487c8a29a", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cc3234975e725cb9418f0ca49087b15769a3462fd1b816fd9d53eb2bc7943e4", Pod:"coredns-66bc5c9577-4kc84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeab7766366", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.092 [INFO][5758] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.092 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" iface="eth0" netns="" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.092 [INFO][5758] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.092 [INFO][5758] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.131 [INFO][5767] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.131 [INFO][5767] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.132 [INFO][5767] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.146 [WARNING][5767] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.147 [INFO][5767] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" HandleID="k8s-pod-network.cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Workload="localhost-k8s-coredns--66bc5c9577--4kc84-eth0" Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.149 [INFO][5767] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:37.152999 containerd[1463]: 2026-04-24 23:39:37.151 [INFO][5758] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25" Apr 24 23:39:37.153635 containerd[1463]: time="2026-04-24T23:39:37.153093324Z" level=info msg="TearDown network for sandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" successfully" Apr 24 23:39:37.176580 containerd[1463]: time="2026-04-24T23:39:37.176284740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:39:37.177902 containerd[1463]: time="2026-04-24T23:39:37.176660881Z" level=info msg="RemovePodSandbox \"cf40c632e849b5b5c80ba62bd1f9b39c489235a6b3e22116bfb099dca604bb25\" returns successfully" Apr 24 23:39:37.196955 containerd[1463]: time="2026-04-24T23:39:37.196872919Z" level=info msg="StopPodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\"" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.297 [WARNING][5787] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc", Pod:"calico-apiserver-88785c5b9-pnfsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali38018d04c8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.298 [INFO][5787] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.298 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" iface="eth0" netns="" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.298 [INFO][5787] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.298 [INFO][5787] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.389 [INFO][5796] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.390 [INFO][5796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.390 [INFO][5796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.409 [WARNING][5796] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.409 [INFO][5796] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.418 [INFO][5796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:37.423322 containerd[1463]: 2026-04-24 23:39:37.421 [INFO][5787] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.423322 containerd[1463]: time="2026-04-24T23:39:37.423186225Z" level=info msg="TearDown network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" successfully" Apr 24 23:39:37.423322 containerd[1463]: time="2026-04-24T23:39:37.423224399Z" level=info msg="StopPodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" returns successfully" Apr 24 23:39:37.424532 containerd[1463]: time="2026-04-24T23:39:37.424428177Z" level=info msg="RemovePodSandbox for \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\"" Apr 24 23:39:37.424532 containerd[1463]: time="2026-04-24T23:39:37.424468601Z" level=info msg="Forcibly stopping sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\"" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.481 [WARNING][5814] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"d2fbecdb-999b-4ff1-ac90-f81a5cfb1384", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbdc4fc034e5b8983bcb340ce3b720ce7525528a892d983d977e0f891ab891fc", Pod:"calico-apiserver-88785c5b9-pnfsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali38018d04c8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.483 [INFO][5814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.483 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" iface="eth0" netns="" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.483 [INFO][5814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.483 [INFO][5814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.505 [INFO][5822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.505 [INFO][5822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.505 [INFO][5822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.520 [WARNING][5822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.520 [INFO][5822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" HandleID="k8s-pod-network.b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Workload="localhost-k8s-calico--apiserver--88785c5b9--pnfsw-eth0" Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.523 [INFO][5822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:37.526753 containerd[1463]: 2026-04-24 23:39:37.525 [INFO][5814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833" Apr 24 23:39:37.527679 containerd[1463]: time="2026-04-24T23:39:37.526789789Z" level=info msg="TearDown network for sandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" successfully" Apr 24 23:39:37.540858 containerd[1463]: time="2026-04-24T23:39:37.540764840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:39:37.540858 containerd[1463]: time="2026-04-24T23:39:37.540830235Z" level=info msg="RemovePodSandbox \"b992311ce16b7d914d109a85f07fe4958597dee456547d4c06aebd19b1b6f833\" returns successfully" Apr 24 23:39:37.541484 containerd[1463]: time="2026-04-24T23:39:37.541465529Z" level=info msg="StopPodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\"" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.629 [WARNING][5845] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2j2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"432f514d-2771-4770-8cbd-167f3881d2c4", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac", Pod:"csi-node-driver-b2j2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie554e77e27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.630 [INFO][5845] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.630 [INFO][5845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" iface="eth0" netns="" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.630 [INFO][5845] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.630 [INFO][5845] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.653 [INFO][5853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.654 [INFO][5853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.654 [INFO][5853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.662 [WARNING][5853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.662 [INFO][5853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.664 [INFO][5853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:37.667591 containerd[1463]: 2026-04-24 23:39:37.665 [INFO][5845] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.668050 containerd[1463]: time="2026-04-24T23:39:37.667595681Z" level=info msg="TearDown network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" successfully" Apr 24 23:39:37.668050 containerd[1463]: time="2026-04-24T23:39:37.667628129Z" level=info msg="StopPodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" returns successfully" Apr 24 23:39:37.668489 containerd[1463]: time="2026-04-24T23:39:37.668463686Z" level=info msg="RemovePodSandbox for \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\"" Apr 24 23:39:37.668536 containerd[1463]: time="2026-04-24T23:39:37.668498378Z" level=info msg="Forcibly stopping sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\"" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.767 [WARNING][5871] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2j2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"432f514d-2771-4770-8cbd-167f3881d2c4", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c79c2dbe9048399717f901652ed4c1557eeb4ab156034337a3fec1be0c2893ac", Pod:"csi-node-driver-b2j2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie554e77e27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.767 [INFO][5871] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.767 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" iface="eth0" netns="" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.767 [INFO][5871] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.767 [INFO][5871] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.789 [INFO][5880] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.790 [INFO][5880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.790 [INFO][5880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.797 [WARNING][5880] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.797 [INFO][5880] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" HandleID="k8s-pod-network.b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Workload="localhost-k8s-csi--node--driver--b2j2n-eth0" Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.805 [INFO][5880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:37.813460 containerd[1463]: 2026-04-24 23:39:37.810 [INFO][5871] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994" Apr 24 23:39:37.814096 containerd[1463]: time="2026-04-24T23:39:37.813492358Z" level=info msg="TearDown network for sandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" successfully" Apr 24 23:39:37.819505 containerd[1463]: time="2026-04-24T23:39:37.819408997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:39:37.820764 containerd[1463]: time="2026-04-24T23:39:37.819603924Z" level=info msg="RemovePodSandbox \"b60e9c9b32a20e2a8e5f561dcbf9a23fbd79de988caa031ff3e337ad844a8994\" returns successfully" Apr 24 23:39:37.821279 containerd[1463]: time="2026-04-24T23:39:37.821247782Z" level=info msg="StopPodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\"" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.890 [WARNING][5897] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"552c601a-b830-4d6c-90dc-907cfec7edbf", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1", Pod:"goldmane-cccfbd5cf-7nrvs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15711d74d09", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.890 [INFO][5897] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.890 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" iface="eth0" netns="" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.890 [INFO][5897] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.890 [INFO][5897] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.915 [INFO][5913] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.915 [INFO][5913] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.915 [INFO][5913] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.920 [WARNING][5913] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.920 [INFO][5913] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.922 [INFO][5913] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:37.926286 containerd[1463]: 2026-04-24 23:39:37.923 [INFO][5897] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:37.926286 containerd[1463]: time="2026-04-24T23:39:37.926042496Z" level=info msg="TearDown network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" successfully" Apr 24 23:39:37.926286 containerd[1463]: time="2026-04-24T23:39:37.926062272Z" level=info msg="StopPodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" returns successfully" Apr 24 23:39:37.930261 containerd[1463]: time="2026-04-24T23:39:37.927825942Z" level=info msg="RemovePodSandbox for \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\"" Apr 24 23:39:37.930261 containerd[1463]: time="2026-04-24T23:39:37.927898231Z" level=info msg="Forcibly stopping sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\"" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:37.969 [WARNING][5931] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"552c601a-b830-4d6c-90dc-907cfec7edbf", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31231cf598d4ce72fb23895d981f1cbbf59692473f7521db3fa005017a82b2f1", Pod:"goldmane-cccfbd5cf-7nrvs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15711d74d09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:37.969 [INFO][5931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:37.969 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" iface="eth0" netns="" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:37.969 [INFO][5931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:37.969 [INFO][5931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.030 [INFO][5939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.031 [INFO][5939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.031 [INFO][5939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.043 [WARNING][5939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.044 [INFO][5939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" HandleID="k8s-pod-network.cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Workload="localhost-k8s-goldmane--cccfbd5cf--7nrvs-eth0" Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.045 [INFO][5939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:38.052709 containerd[1463]: 2026-04-24 23:39:38.050 [INFO][5931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5" Apr 24 23:39:38.053631 containerd[1463]: time="2026-04-24T23:39:38.052854512Z" level=info msg="TearDown network for sandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" successfully" Apr 24 23:39:38.058572 containerd[1463]: time="2026-04-24T23:39:38.058496849Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:39:38.059195 containerd[1463]: time="2026-04-24T23:39:38.058609015Z" level=info msg="RemovePodSandbox \"cf74189f06da7c10cf2cbc7ec98f5a13c0c99a6b080232db38ebe9995ec893e5\" returns successfully" Apr 24 23:39:38.059921 containerd[1463]: time="2026-04-24T23:39:38.059820327Z" level=info msg="StopPodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\"" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.118 [WARNING][5956] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0", GenerateName:"calico-kube-controllers-7874b6c748-", Namespace:"calico-system", SelfLink:"", UID:"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7874b6c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad", Pod:"calico-kube-controllers-7874b6c748-zjlx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e344f038d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.119 [INFO][5956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.119 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" iface="eth0" netns="" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.119 [INFO][5956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.119 [INFO][5956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.166 [INFO][5968] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.166 [INFO][5968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.166 [INFO][5968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.173 [WARNING][5968] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.173 [INFO][5968] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.175 [INFO][5968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:38.177814 containerd[1463]: 2026-04-24 23:39:38.176 [INFO][5956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.178281 containerd[1463]: time="2026-04-24T23:39:38.177836598Z" level=info msg="TearDown network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" successfully" Apr 24 23:39:38.178281 containerd[1463]: time="2026-04-24T23:39:38.177855665Z" level=info msg="StopPodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" returns successfully" Apr 24 23:39:38.178587 containerd[1463]: time="2026-04-24T23:39:38.178561861Z" level=info msg="RemovePodSandbox for \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\"" Apr 24 23:39:38.178629 containerd[1463]: time="2026-04-24T23:39:38.178593824Z" level=info msg="Forcibly stopping sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\"" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.266 [WARNING][5991] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0", GenerateName:"calico-kube-controllers-7874b6c748-", Namespace:"calico-system", SelfLink:"", UID:"90155a0c-e2a3-4e4e-bd3b-19b850d41c9a", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7874b6c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9620da3d1e90aded37a64a1fd7c1e066d81ec42952f5b15497c4eaf46f4daad", Pod:"calico-kube-controllers-7874b6c748-zjlx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e344f038d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.267 [INFO][5991] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.267 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" iface="eth0" netns="" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.267 [INFO][5991] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.267 [INFO][5991] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.291 [INFO][6002] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.291 [INFO][6002] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.291 [INFO][6002] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.296 [WARNING][6002] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.296 [INFO][6002] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" HandleID="k8s-pod-network.2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Workload="localhost-k8s-calico--kube--controllers--7874b6c748--zjlx4-eth0" Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.298 [INFO][6002] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:38.301891 containerd[1463]: 2026-04-24 23:39:38.299 [INFO][5991] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8" Apr 24 23:39:38.302341 containerd[1463]: time="2026-04-24T23:39:38.301939889Z" level=info msg="TearDown network for sandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" successfully" Apr 24 23:39:38.306312 containerd[1463]: time="2026-04-24T23:39:38.306274710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:39:38.306402 containerd[1463]: time="2026-04-24T23:39:38.306355355Z" level=info msg="RemovePodSandbox \"2180e2333a01683972e80eac4bfd6cfce16916e736ad6c60496748b1ddc964f8\" returns successfully" Apr 24 23:39:38.309271 containerd[1463]: time="2026-04-24T23:39:38.309239785Z" level=info msg="StopPodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\"" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.347 [WARNING][6019] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--htcpb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67600210-1e44-4182-a447-6bd334f7adf6", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814", Pod:"coredns-66bc5c9577-htcpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief58c1de418", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.348 [INFO][6019] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.348 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" iface="eth0" netns="" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.348 [INFO][6019] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.348 [INFO][6019] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.374 [INFO][6027] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.374 [INFO][6027] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.374 [INFO][6027] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.383 [WARNING][6027] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.383 [INFO][6027] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.385 [INFO][6027] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:38.388974 containerd[1463]: 2026-04-24 23:39:38.387 [INFO][6019] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.389540 containerd[1463]: time="2026-04-24T23:39:38.389065613Z" level=info msg="TearDown network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" successfully" Apr 24 23:39:38.389540 containerd[1463]: time="2026-04-24T23:39:38.389092037Z" level=info msg="StopPodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" returns successfully" Apr 24 23:39:38.389901 containerd[1463]: time="2026-04-24T23:39:38.389870349Z" level=info msg="RemovePodSandbox for \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\"" Apr 24 23:39:38.389981 containerd[1463]: time="2026-04-24T23:39:38.389906441Z" level=info msg="Forcibly stopping sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\"" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.479 [WARNING][6046] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--htcpb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67600210-1e44-4182-a447-6bd334f7adf6", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a2588598b2e88984c149fc5209e4d8842314398cf9c3bcdd0edaa9927e76814", Pod:"coredns-66bc5c9577-htcpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief58c1de418", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.479 [INFO][6046] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.479 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" iface="eth0" netns="" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.479 [INFO][6046] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.479 [INFO][6046] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.506 [INFO][6055] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.506 [INFO][6055] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.506 [INFO][6055] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.522 [WARNING][6055] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.523 [INFO][6055] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" HandleID="k8s-pod-network.fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Workload="localhost-k8s-coredns--66bc5c9577--htcpb-eth0" Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.526 [INFO][6055] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:39:38.529888 containerd[1463]: 2026-04-24 23:39:38.527 [INFO][6046] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c" Apr 24 23:39:38.531938 containerd[1463]: time="2026-04-24T23:39:38.529873865Z" level=info msg="TearDown network for sandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" successfully" Apr 24 23:39:38.535853 containerd[1463]: time="2026-04-24T23:39:38.535794688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:39:38.536416 containerd[1463]: time="2026-04-24T23:39:38.536012095Z" level=info msg="RemovePodSandbox \"fb1434531e95c9da1ef880dba4684da20d40ea2d64a94221cc5f8276fb8f324c\" returns successfully" Apr 24 23:39:38.537182 containerd[1463]: time="2026-04-24T23:39:38.537155381Z" level=info msg="StopPodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\"" Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.585 [WARNING][6072] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" WorkloadEndpoint="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0" Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.586 [INFO][6072] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.586 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" iface="eth0" netns=""
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.586 [INFO][6072] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.586 [INFO][6072] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.638 [INFO][6081] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.638 [INFO][6081] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.638 [INFO][6081] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.655 [WARNING][6081] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.655 [INFO][6081] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.658 [INFO][6081] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:39:38.661907 containerd[1463]: 2026-04-24 23:39:38.659 [INFO][6072] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.662352 containerd[1463]: time="2026-04-24T23:39:38.661966113Z" level=info msg="TearDown network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" successfully"
Apr 24 23:39:38.662352 containerd[1463]: time="2026-04-24T23:39:38.661989349Z" level=info msg="StopPodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" returns successfully"
Apr 24 23:39:38.662825 containerd[1463]: time="2026-04-24T23:39:38.662795245Z" level=info msg="RemovePodSandbox for \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\""
Apr 24 23:39:38.662860 containerd[1463]: time="2026-04-24T23:39:38.662830666Z" level=info msg="Forcibly stopping sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\""
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.724 [WARNING][6098] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" WorkloadEndpoint="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.724 [INFO][6098] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.724 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" iface="eth0" netns=""
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.724 [INFO][6098] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.724 [INFO][6098] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.752 [INFO][6106] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.752 [INFO][6106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.752 [INFO][6106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.767 [WARNING][6106] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.767 [INFO][6106] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" HandleID="k8s-pod-network.f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429" Workload="localhost-k8s-whisker--7bffb8c7cd--rxlxx-eth0"
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.787 [INFO][6106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:39:38.793268 containerd[1463]: 2026-04-24 23:39:38.790 [INFO][6098] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429"
Apr 24 23:39:38.794180 containerd[1463]: time="2026-04-24T23:39:38.793097980Z" level=info msg="TearDown network for sandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" successfully"
Apr 24 23:39:38.808771 containerd[1463]: time="2026-04-24T23:39:38.808663634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:39:38.809549 containerd[1463]: time="2026-04-24T23:39:38.808798986Z" level=info msg="RemovePodSandbox \"f6a213f5933ae3fab64c76bc49f05afd3438c441afc2ab5c08d269b33b220429\" returns successfully"
Apr 24 23:39:38.809549 containerd[1463]: time="2026-04-24T23:39:38.809473650Z" level=info msg="StopPodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\""
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.864 [WARNING][6124] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"930956ca-d26d-4366-9b81-420c40db0a9a", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82", Pod:"calico-apiserver-88785c5b9-j7rvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d3275631d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.865 [INFO][6124] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.865 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" iface="eth0" netns=""
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.865 [INFO][6124] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.865 [INFO][6124] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.902 [INFO][6132] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0"
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.903 [INFO][6132] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.903 [INFO][6132] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.963 [WARNING][6132] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0"
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.963 [INFO][6132] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0"
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.973 [INFO][6132] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:39:38.978845 containerd[1463]: 2026-04-24 23:39:38.975 [INFO][6124] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:38.978845 containerd[1463]: time="2026-04-24T23:39:38.978876017Z" level=info msg="TearDown network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" successfully"
Apr 24 23:39:38.980898 containerd[1463]: time="2026-04-24T23:39:38.978908046Z" level=info msg="StopPodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" returns successfully"
Apr 24 23:39:38.982494 containerd[1463]: time="2026-04-24T23:39:38.982376770Z" level=info msg="RemovePodSandbox for \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\""
Apr 24 23:39:38.982494 containerd[1463]: time="2026-04-24T23:39:38.982445456Z" level=info msg="Forcibly stopping sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\""
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.043 [WARNING][6149] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0", GenerateName:"calico-apiserver-88785c5b9-", Namespace:"calico-system", SelfLink:"", UID:"930956ca-d26d-4366-9b81-420c40db0a9a", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 38, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88785c5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cead0adc2eb1dc4765b80b7356c5459b2126ff1a3f7238c1e369ba17ca10a82", Pod:"calico-apiserver-88785c5b9-j7rvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d3275631d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.043 [INFO][6149] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.043 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" iface="eth0" netns=""
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.043 [INFO][6149] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.043 [INFO][6149] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.066 [INFO][6157] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0"
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.066 [INFO][6157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.066 [INFO][6157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.071 [WARNING][6157] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0"
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.071 [INFO][6157] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" HandleID="k8s-pod-network.56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002" Workload="localhost-k8s-calico--apiserver--88785c5b9--j7rvq-eth0"
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.073 [INFO][6157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:39:39.077713 containerd[1463]: 2026-04-24 23:39:39.074 [INFO][6149] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002"
Apr 24 23:39:39.077713 containerd[1463]: time="2026-04-24T23:39:39.077355186Z" level=info msg="TearDown network for sandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" successfully"
Apr 24 23:39:39.085490 containerd[1463]: time="2026-04-24T23:39:39.085385259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:39:39.085490 containerd[1463]: time="2026-04-24T23:39:39.085495602Z" level=info msg="RemovePodSandbox \"56973ffb1c1d8709261b4545ceb46e69bad9b378403b4f4ebb9acf4a015a4002\" returns successfully"
Apr 24 23:39:39.376992 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:36300.service - OpenSSH per-connection server daemon (10.0.0.1:36300).
Apr 24 23:39:39.506826 sshd[6166]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:39.512066 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:39.522845 systemd-logind[1453]: New session 12 of user core.
Apr 24 23:39:39.534407 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 23:39:39.785448 sshd[6166]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:39.797236 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:36300.service: Deactivated successfully.
Apr 24 23:39:39.798604 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 23:39:39.799837 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Apr 24 23:39:39.805352 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:51444.service - OpenSSH per-connection server daemon (10.0.0.1:51444).
Apr 24 23:39:39.806170 systemd-logind[1453]: Removed session 12.
Apr 24 23:39:39.833798 sshd[6181]: Accepted publickey for core from 10.0.0.1 port 51444 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:39.835037 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:39.843684 systemd-logind[1453]: New session 13 of user core.
Apr 24 23:39:39.858280 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 23:39:40.057647 sshd[6181]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:40.067210 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:51444.service: Deactivated successfully.
Apr 24 23:39:40.070500 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 23:39:40.072129 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Apr 24 23:39:40.082427 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:51454.service - OpenSSH per-connection server daemon (10.0.0.1:51454).
Apr 24 23:39:40.091606 systemd-logind[1453]: Removed session 13.
Apr 24 23:39:40.144193 sshd[6193]: Accepted publickey for core from 10.0.0.1 port 51454 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:40.144933 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:40.157220 systemd-logind[1453]: New session 14 of user core.
Apr 24 23:39:40.164765 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 23:39:40.289519 sshd[6193]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:40.292947 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:51454.service: Deactivated successfully.
Apr 24 23:39:40.295017 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 23:39:40.295765 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Apr 24 23:39:40.296536 systemd-logind[1453]: Removed session 14.
Apr 24 23:39:44.031832 systemd[1]: run-containerd-runc-k8s.io-e2ea2fa68e0bab65b3d574828c24c6b62dc27accff749742412cb8657c73b761-runc.smmLv4.mount: Deactivated successfully.
Apr 24 23:39:45.302818 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470).
Apr 24 23:39:45.347840 sshd[6233]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:45.348980 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:45.352434 systemd-logind[1453]: New session 15 of user core.
Apr 24 23:39:45.363668 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 23:39:45.511230 sshd[6233]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:45.520386 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:51470.service: Deactivated successfully.
Apr 24 23:39:45.521741 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 23:39:45.523166 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Apr 24 23:39:45.535419 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:51478.service - OpenSSH per-connection server daemon (10.0.0.1:51478).
Apr 24 23:39:45.536139 systemd-logind[1453]: Removed session 15.
Apr 24 23:39:45.570479 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 51478 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:45.572090 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:45.576346 systemd-logind[1453]: New session 16 of user core.
Apr 24 23:39:45.585274 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 23:39:45.784083 sshd[6247]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:45.792517 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:51478.service: Deactivated successfully.
Apr 24 23:39:45.793872 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 23:39:45.796806 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Apr 24 23:39:45.808145 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:51484.service - OpenSSH per-connection server daemon (10.0.0.1:51484).
Apr 24 23:39:45.809782 systemd-logind[1453]: Removed session 16.
Apr 24 23:39:45.854010 sshd[6259]: Accepted publickey for core from 10.0.0.1 port 51484 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:45.856010 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:45.860619 systemd-logind[1453]: New session 17 of user core.
Apr 24 23:39:45.869277 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 23:39:46.583880 sshd[6259]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:46.599384 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:51492.service - OpenSSH per-connection server daemon (10.0.0.1:51492).
Apr 24 23:39:46.599742 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:51484.service: Deactivated successfully.
Apr 24 23:39:46.608877 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 23:39:46.611944 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Apr 24 23:39:46.615041 systemd-logind[1453]: Removed session 17.
Apr 24 23:39:46.647368 sshd[6282]: Accepted publickey for core from 10.0.0.1 port 51492 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:46.648811 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:46.653363 systemd-logind[1453]: New session 18 of user core.
Apr 24 23:39:46.662812 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 23:39:46.957259 sshd[6282]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:46.966278 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:51492.service: Deactivated successfully.
Apr 24 23:39:46.968737 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 23:39:46.971997 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Apr 24 23:39:46.977757 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:51508.service - OpenSSH per-connection server daemon (10.0.0.1:51508).
Apr 24 23:39:46.978512 systemd-logind[1453]: Removed session 18.
Apr 24 23:39:47.008221 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 51508 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:47.009751 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:47.013978 systemd-logind[1453]: New session 19 of user core.
Apr 24 23:39:47.023281 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 23:39:47.164843 sshd[6298]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:47.169986 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:51508.service: Deactivated successfully.
Apr 24 23:39:47.173470 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 23:39:47.174266 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Apr 24 23:39:47.175158 systemd-logind[1453]: Removed session 19.
Apr 24 23:39:47.809592 kubelet[2510]: E0424 23:39:47.809426 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:39:52.198388 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:36694.service - OpenSSH per-connection server daemon (10.0.0.1:36694).
Apr 24 23:39:52.236643 sshd[6326]: Accepted publickey for core from 10.0.0.1 port 36694 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:52.237780 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:52.241243 systemd-logind[1453]: New session 20 of user core.
Apr 24 23:39:52.245267 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:39:52.438677 sshd[6326]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:52.441930 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:36694.service: Deactivated successfully.
Apr 24 23:39:52.443679 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:39:52.444266 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:39:52.445049 systemd-logind[1453]: Removed session 20.
Apr 24 23:39:53.807709 kubelet[2510]: E0424 23:39:53.807641 2510 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:39:57.457343 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:36702.service - OpenSSH per-connection server daemon (10.0.0.1:36702).
Apr 24 23:39:57.489628 sshd[6350]: Accepted publickey for core from 10.0.0.1 port 36702 ssh2: RSA SHA256:+pKTfkc0y+yBqDK+9JvbrBpZ4CVWpHNwHEurNBMeOGE
Apr 24 23:39:57.491476 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:39:57.496498 systemd-logind[1453]: New session 21 of user core.
Apr 24 23:39:57.506352 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:39:57.665654 sshd[6350]: pam_unix(sshd:session): session closed for user core
Apr 24 23:39:57.671570 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:36702.service: Deactivated successfully.
Apr 24 23:39:57.674420 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:39:57.675327 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:39:57.676695 systemd-logind[1453]: Removed session 21.