Apr 21 10:30:15.849465 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:30:15.849496 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:30:15.849519 kernel: BIOS-provided physical RAM map:
Apr 21 10:30:15.849531 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:30:15.849542 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 10:30:15.849587 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 10:30:15.849599 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 10:30:15.849610 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 10:30:15.849621 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 21 10:30:15.849632 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 21 10:30:15.849645 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 21 10:30:15.849657 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 21 10:30:15.849668 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 21 10:30:15.849686 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 21 10:30:15.849698 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 21 10:30:15.849710 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 10:30:15.849733 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 21 10:30:15.849744 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 21 10:30:15.849756 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 10:30:15.849767 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:30:15.849779 kernel: NX (Execute Disable) protection: active
Apr 21 10:30:15.849790 kernel: APIC: Static calls initialized
Apr 21 10:30:15.849811 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:30:15.849816 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 21 10:30:15.849820 kernel: SMBIOS 2.8 present.
Apr 21 10:30:15.849825 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 21 10:30:15.849829 kernel: Hypervisor detected: KVM
Apr 21 10:30:15.849842 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:30:15.849847 kernel: kvm-clock: using sched offset of 5025003023 cycles
Apr 21 10:30:15.849851 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:30:15.849856 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:30:15.849861 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:30:15.849866 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:30:15.849871 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 21 10:30:15.849876 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:30:15.849881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:30:15.849887 kernel: Using GB pages for direct mapping
Apr 21 10:30:15.849891 kernel: Secure boot disabled
Apr 21 10:30:15.849896 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:30:15.849901 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 21 10:30:15.849908 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:30:15.849913 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:30:15.849918 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:30:15.849925 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 21 10:30:15.849930 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:30:15.849937 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:30:15.849945 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:30:15.849954 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:30:15.849962 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:30:15.849983 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 21 10:30:15.850003 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 21 10:30:15.850008 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 21 10:30:15.850013 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 21 10:30:15.850018 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 21 10:30:15.850023 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 21 10:30:15.850028 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 21 10:30:15.850033 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 21 10:30:15.850038 kernel: No NUMA configuration found
Apr 21 10:30:15.850043 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 21 10:30:15.850049 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 21 10:30:15.850054 kernel: Zone ranges:
Apr 21 10:30:15.850059 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:30:15.850064 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 21 10:30:15.850069 kernel: Normal empty
Apr 21 10:30:15.850074 kernel: Movable zone start for each node
Apr 21 10:30:15.850079 kernel: Early memory node ranges
Apr 21 10:30:15.850084 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:30:15.850088 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 21 10:30:15.850093 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 21 10:30:15.850099 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 21 10:30:15.850104 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 21 10:30:15.850109 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 21 10:30:15.850114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 21 10:30:15.850118 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:30:15.850124 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:30:15.850128 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 21 10:30:15.850133 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:30:15.850138 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 21 10:30:15.850144 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:30:15.850149 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 21 10:30:15.850154 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:30:15.850159 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:30:15.850164 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:30:15.850169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:30:15.850174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:30:15.850178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:30:15.850183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:30:15.850188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:30:15.850194 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:30:15.850199 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:30:15.850204 kernel: TSC deadline timer available
Apr 21 10:30:15.850209 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:30:15.850214 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:30:15.850219 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:30:15.850223 kernel: kvm-guest: setup PV sched yield
Apr 21 10:30:15.850228 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 21 10:30:15.850233 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:30:15.850239 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:30:15.850245 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:30:15.850250 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:30:15.850255 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:30:15.850259 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:30:15.850264 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:30:15.850269 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:30:15.850275 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:30:15.850281 kernel: random: crng init done
Apr 21 10:30:15.850286 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:30:15.850291 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:30:15.850296 kernel: Fallback order for Node 0: 0
Apr 21 10:30:15.850301 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 21 10:30:15.850305 kernel: Policy zone: DMA32
Apr 21 10:30:15.850310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:30:15.850316 kernel: Memory: 2394672K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 172124K reserved, 0K cma-reserved)
Apr 21 10:30:15.850323 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:30:15.850329 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:30:15.850334 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:30:15.850339 kernel: Dynamic Preempt: voluntary
Apr 21 10:30:15.850344 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:30:15.850354 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:30:15.850361 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:30:15.850366 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:30:15.850372 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:30:15.850377 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:30:15.850383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:30:15.850388 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:30:15.850395 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:30:15.850400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:30:15.850406 kernel: Console: colour dummy device 80x25
Apr 21 10:30:15.850411 kernel: printk: console [ttyS0] enabled
Apr 21 10:30:15.850417 kernel: ACPI: Core revision 20230628
Apr 21 10:30:15.850422 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:30:15.850429 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:30:15.850434 kernel: x2apic enabled
Apr 21 10:30:15.850440 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:30:15.850445 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:30:15.850451 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:30:15.850456 kernel: kvm-guest: setup PV IPIs
Apr 21 10:30:15.850461 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:30:15.850467 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:30:15.850472 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:30:15.850479 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:30:15.850484 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:30:15.850490 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:30:15.850495 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:30:15.850501 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:30:15.850507 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:30:15.850512 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:30:15.850518 kernel: RETBleed: Vulnerable
Apr 21 10:30:15.850525 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:30:15.850530 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:30:15.850536 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:30:15.850541 kernel: active return thunk: its_return_thunk
Apr 21 10:30:15.850571 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:30:15.850577 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:30:15.850583 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:30:15.850588 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:30:15.850594 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:30:15.850601 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:30:15.850606 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:30:15.850612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:30:15.850617 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:30:15.850622 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:30:15.850628 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:30:15.850633 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:30:15.850639 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:30:15.850644 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:30:15.850651 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:30:15.850656 kernel: landlock: Up and running.
Apr 21 10:30:15.850662 kernel: SELinux: Initializing.
Apr 21 10:30:15.850667 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:30:15.850672 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:30:15.850678 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:30:15.850684 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:30:15.850689 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:30:15.850695 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:30:15.850702 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:30:15.850707 kernel: signal: max sigframe size: 3632
Apr 21 10:30:15.850713 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:30:15.850718 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:30:15.850724 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:30:15.850729 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:30:15.850734 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:30:15.850740 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:30:15.850745 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:30:15.850752 kernel: smpboot: Max logical packages: 1
Apr 21 10:30:15.850757 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:30:15.850762 kernel: devtmpfs: initialized
Apr 21 10:30:15.850768 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:30:15.850774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 21 10:30:15.850779 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 21 10:30:15.850784 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 21 10:30:15.850790 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 21 10:30:15.850808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 21 10:30:15.850815 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:30:15.850821 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:30:15.850826 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:30:15.850832 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:30:15.850837 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:30:15.850842 kernel: audit: type=2000 audit(1776767415.489:1): state=initialized audit_enabled=0 res=1
Apr 21 10:30:15.850848 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:30:15.850853 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:30:15.850859 kernel: cpuidle: using governor menu
Apr 21 10:30:15.850865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:30:15.850871 kernel: dca service started, version 1.12.1
Apr 21 10:30:15.850876 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:30:15.850891 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:30:15.850896 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:30:15.850902 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:30:15.850916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:30:15.850921 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:30:15.850936 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:30:15.850968 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:30:15.850979 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:30:15.850988 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:30:15.850993 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:30:15.850999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:30:15.851004 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:30:15.851009 kernel: ACPI: Interpreter enabled
Apr 21 10:30:15.851015 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:30:15.851029 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:30:15.851036 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:30:15.851042 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:30:15.851047 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:30:15.851053 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:30:15.851170 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:30:15.851234 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:30:15.851290 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:30:15.851299 kernel: PCI host bridge to bus 0000:00
Apr 21 10:30:15.851357 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:30:15.851409 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:30:15.851458 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:30:15.851508 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:30:15.851622 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:30:15.851747 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 21 10:30:15.851828 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:30:15.851896 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:30:15.851970 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:30:15.852041 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 21 10:30:15.852097 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 21 10:30:15.852151 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:30:15.852207 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:30:15.852265 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:30:15.852327 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:30:15.852384 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 21 10:30:15.852440 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 21 10:30:15.852495 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 21 10:30:15.852580 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:30:15.852642 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 21 10:30:15.852698 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 21 10:30:15.852753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 21 10:30:15.852834 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:30:15.852889 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 21 10:30:15.852951 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 21 10:30:15.853031 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 21 10:30:15.853115 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 21 10:30:15.853178 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:30:15.853233 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:30:15.853292 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:30:15.853348 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 21 10:30:15.853402 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 21 10:30:15.853461 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:30:15.853519 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 21 10:30:15.853526 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:30:15.853532 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:30:15.853537 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:30:15.853543 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:30:15.853618 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:30:15.853624 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:30:15.853629 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:30:15.853637 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:30:15.853642 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:30:15.853648 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:30:15.853653 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:30:15.853659 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:30:15.853664 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:30:15.853669 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:30:15.853675 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:30:15.853680 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:30:15.853687 kernel: iommu: Default domain type: Translated
Apr 21 10:30:15.853693 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:30:15.853698 kernel: efivars: Registered efivars operations
Apr 21 10:30:15.853703 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:30:15.853709 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:30:15.853714 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 21 10:30:15.853720 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 21 10:30:15.853725 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 21 10:30:15.853730 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 21 10:30:15.853791 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:30:15.853864 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:30:15.853918 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:30:15.853925 kernel: vgaarb: loaded
Apr 21 10:30:15.853930 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:30:15.853936 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:30:15.853941 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:30:15.853947 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:30:15.853952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:30:15.853959 kernel: pnp: PnP ACPI init
Apr 21 10:30:15.854018 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:30:15.854026 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:30:15.854032 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:30:15.854037 kernel: NET: Registered PF_INET protocol family
Apr 21 10:30:15.854043 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:30:15.854048 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:30:15.854054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:30:15.854061 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:30:15.854066 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:30:15.854072 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:30:15.854078 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:30:15.854083 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:30:15.854089 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:30:15.854094 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:30:15.854149 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 21 10:30:15.854205 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 21 10:30:15.854258 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:30:15.854307 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:30:15.854355 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:30:15.854405 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:30:15.854453 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:30:15.854502 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 21 10:30:15.854509 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:30:15.854514 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:30:15.854521 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:30:15.854527 kernel: Initialise system trusted keyrings
Apr 21 10:30:15.854532 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:30:15.854538 kernel: Key type asymmetric registered
Apr 21 10:30:15.854543 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:30:15.854569 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:30:15.854574 kernel: io scheduler mq-deadline registered
Apr 21 10:30:15.854580 kernel: io scheduler kyber registered
Apr 21 10:30:15.854587 kernel: io scheduler bfq registered
Apr 21 10:30:15.854593 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:30:15.854599 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:30:15.854604 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:30:15.854610 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:30:15.854615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:30:15.854621 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:30:15.854627 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:30:15.854633 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:30:15.854639 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:30:15.854699 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:30:15.854707 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:30:15.854758 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:30:15.854826 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:30:15 UTC (1776767415)
Apr 21 10:30:15.854878 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 21 10:30:15.854885 kernel: intel_pstate: CPU model not supported
Apr 21 10:30:15.854891 kernel: efifb: probing for efifb
Apr 21 10:30:15.854898 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 21 10:30:15.854903 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 21 10:30:15.854909 kernel: efifb: scrolling: redraw
Apr 21 10:30:15.854914 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 21 10:30:15.854920 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:30:15.854925 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:30:15.854941 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:30:15.854948 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:30:15.854953 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:30:15.854960 kernel: Segment Routing with IPv6
Apr 21 10:30:15.854965 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:30:15.854971 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:30:15.854976 kernel: Key type dns_resolver registered
Apr 21 10:30:15.854982 kernel: IPI shorthand broadcast: enabled
Apr 21 10:30:15.854987 kernel: sched_clock: Marking stable (696007746, 200233353)->(942748683, -46507584)
Apr 21 10:30:15.854993 kernel: registered taskstats version 1
Apr 21 10:30:15.854999 kernel: Loading compiled-in X.509 certificates
Apr 21 10:30:15.855004 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:30:15.855011 kernel: Key type .fscrypt registered
Apr 21 10:30:15.855016 kernel: Key type fscrypt-provisioning registered
Apr 21 10:30:15.855023 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:30:15.855028 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:30:15.855034 kernel: ima: No architecture policies found Apr 21 10:30:15.855039 kernel: clk: Disabling unused clocks Apr 21 10:30:15.855045 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:30:15.855051 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:30:15.855056 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:30:15.855062 kernel: Run /init as init process Apr 21 10:30:15.855068 kernel: with arguments: Apr 21 10:30:15.855074 kernel: /init Apr 21 10:30:15.855079 kernel: with environment: Apr 21 10:30:15.855085 kernel: HOME=/ Apr 21 10:30:15.855090 kernel: TERM=linux Apr 21 10:30:15.855098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:30:15.855106 systemd[1]: Detected virtualization kvm. Apr 21 10:30:15.855114 systemd[1]: Detected architecture x86-64. Apr 21 10:30:15.855121 systemd[1]: Running in initrd. Apr 21 10:30:15.855127 systemd[1]: No hostname configured, using default hostname. Apr 21 10:30:15.855133 systemd[1]: Hostname set to . Apr 21 10:30:15.855139 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:30:15.855146 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:30:15.855152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:30:15.855158 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:30:15.855165 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 21 10:30:15.855171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:30:15.855177 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:30:15.855183 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:30:15.855191 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:30:15.855198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:30:15.855204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:30:15.855210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:30:15.855216 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:30:15.855222 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:30:15.855228 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:30:15.855234 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:30:15.855241 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:30:15.855247 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:30:15.855253 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:30:15.855259 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:30:15.855265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:30:15.855272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:30:15.855278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:30:15.855284 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:30:15.855290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:30:15.855297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:30:15.855303 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:30:15.855309 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:30:15.855315 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:30:15.855321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:30:15.855327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:30:15.855333 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:30:15.855340 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:30:15.855347 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:30:15.855365 systemd-journald[194]: Collecting audit messages is disabled. Apr 21 10:30:15.855382 systemd-journald[194]: Journal started Apr 21 10:30:15.855397 systemd-journald[194]: Runtime Journal (/run/log/journal/87e643309bf14a9eb2a5b5702d7dc1a3) is 6.0M, max 48.3M, 42.2M free. Apr 21 10:30:15.850012 systemd-modules-load[195]: Inserted module 'overlay' Apr 21 10:30:15.863947 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:30:15.867610 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:30:15.868228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:30:15.871236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:30:15.878582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 21 10:30:15.880738 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 21 10:30:15.881957 kernel: Bridge firewalling registered Apr 21 10:30:15.886712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:30:15.887861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:30:15.888446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:30:15.888635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:30:15.892668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:30:15.901950 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:30:15.909417 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:30:15.912476 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:30:15.916031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:30:15.928682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:30:15.932403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:30:15.940199 dracut-cmdline[231]: dracut-dracut-053 Apr 21 10:30:15.943063 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:30:15.952910 systemd-resolved[232]: Positive Trust Anchors: Apr 21 10:30:15.952931 systemd-resolved[232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:30:15.952955 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:30:15.954741 systemd-resolved[232]: Defaulting to hostname 'linux'. Apr 21 10:30:15.955367 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:30:15.956446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:30:16.021599 kernel: SCSI subsystem initialized Apr 21 10:30:16.028610 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:30:16.038593 kernel: iscsi: registered transport (tcp) Apr 21 10:30:16.056075 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:30:16.056110 kernel: QLogic iSCSI HBA Driver Apr 21 10:30:16.087360 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:30:16.105742 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:30:16.128697 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:30:16.128739 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:30:16.130782 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:30:16.169605 kernel: raid6: avx512x4 gen() 45341 MB/s Apr 21 10:30:16.186595 kernel: raid6: avx512x2 gen() 43330 MB/s Apr 21 10:30:16.203616 kernel: raid6: avx512x1 gen() 43441 MB/s Apr 21 10:30:16.220601 kernel: raid6: avx2x4 gen() 37694 MB/s Apr 21 10:30:16.237612 kernel: raid6: avx2x2 gen() 37955 MB/s Apr 21 10:30:16.255194 kernel: raid6: avx2x1 gen() 29181 MB/s Apr 21 10:30:16.255221 kernel: raid6: using algorithm avx512x4 gen() 45341 MB/s Apr 21 10:30:16.273163 kernel: raid6: .... xor() 9919 MB/s, rmw enabled Apr 21 10:30:16.273211 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:30:16.291604 kernel: xor: automatically using best checksumming function avx Apr 21 10:30:16.418604 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:30:16.427699 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:30:16.441764 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:30:16.454184 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 21 10:30:16.458176 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:30:16.464778 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:30:16.475066 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Apr 21 10:30:16.499189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:30:16.504773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:30:16.532844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:30:16.539696 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:30:16.549212 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:30:16.550462 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:30:16.552516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:30:16.559563 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:30:16.559581 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:30:16.561646 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:30:16.574340 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:30:16.574452 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:30:16.574462 kernel: GPT:9289727 != 19775487 Apr 21 10:30:16.574468 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:30:16.574475 kernel: GPT:9289727 != 19775487 Apr 21 10:30:16.574482 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:30:16.574489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:30:16.572728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:30:16.581615 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:30:16.583917 kernel: AES CTR mode by8 optimization enabled Apr 21 10:30:16.589908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:30:16.597570 kernel: libata version 3.00 loaded. Apr 21 10:30:16.600073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 21 10:30:16.614767 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:30:16.614917 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:30:16.614927 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:30:16.615000 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:30:16.615068 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Apr 21 10:30:16.615076 kernel: scsi host0: ahci Apr 21 10:30:16.600173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:30:16.619899 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (474) Apr 21 10:30:16.604500 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:30:16.605083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:30:16.605202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:30:16.605356 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:30:16.615906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 10:30:16.622567 kernel: scsi host1: ahci Apr 21 10:30:16.625397 kernel: scsi host2: ahci Apr 21 10:30:16.625545 kernel: scsi host3: ahci Apr 21 10:30:16.626582 kernel: scsi host4: ahci Apr 21 10:30:16.634590 kernel: scsi host5: ahci Apr 21 10:30:16.634754 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 21 10:30:16.634776 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 21 10:30:16.634790 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 21 10:30:16.634843 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 21 10:30:16.634859 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 21 10:30:16.634871 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 21 10:30:16.633921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 10:30:16.652198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:30:16.655731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:30:16.656285 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:30:16.665933 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 10:30:16.681714 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:30:16.682419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:30:16.682463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:30:16.685575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:30:16.689589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 10:30:16.698454 disk-uuid[560]: Primary Header is updated. Apr 21 10:30:16.698454 disk-uuid[560]: Secondary Entries is updated. Apr 21 10:30:16.698454 disk-uuid[560]: Secondary Header is updated. Apr 21 10:30:16.703946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:30:16.703969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:30:16.705982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:30:16.710203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:30:16.713764 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:30:16.734889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:30:16.947473 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:30:16.947567 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:30:16.949607 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:30:16.949620 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:30:16.950595 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:30:16.952380 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:30:16.952389 kernel: ata3.00: applying bridge limits Apr 21 10:30:16.953583 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:30:16.955603 kernel: ata3.00: configured for UDMA/100 Apr 21 10:30:16.957592 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:30:17.014694 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:30:17.014903 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:30:17.027659 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:30:17.709599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:30:17.709966 disk-uuid[563]: The operation has completed successfully. Apr 21 10:30:17.729205 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 21 10:30:17.729293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:30:17.744703 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:30:17.750487 sh[599]: Success Apr 21 10:30:17.761592 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:30:17.786903 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:30:17.801705 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:30:17.804275 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:30:17.814526 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:30:17.814572 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:30:17.814584 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:30:17.815938 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:30:17.816974 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:30:17.821789 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:30:17.824438 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:30:17.833679 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:30:17.835671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 21 10:30:17.845049 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:30:17.845081 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:30:17.845090 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:30:17.849682 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:30:17.854904 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:30:17.857446 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:30:17.862534 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:30:17.870721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:30:17.911640 ignition[695]: Ignition 2.19.0 Apr 21 10:30:17.911654 ignition[695]: Stage: fetch-offline Apr 21 10:30:17.911680 ignition[695]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:30:17.911687 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:30:17.911743 ignition[695]: parsed url from cmdline: "" Apr 21 10:30:17.911745 ignition[695]: no config URL provided Apr 21 10:30:17.911748 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:30:17.911754 ignition[695]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:30:17.911769 ignition[695]: op(1): [started] loading QEMU firmware config module Apr 21 10:30:17.911773 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:30:17.917508 ignition[695]: op(1): [finished] loading QEMU firmware config module Apr 21 10:30:17.927183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:30:17.937702 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 10:30:17.952227 systemd-networkd[788]: lo: Link UP Apr 21 10:30:17.952246 systemd-networkd[788]: lo: Gained carrier Apr 21 10:30:17.953049 systemd-networkd[788]: Enumeration completed Apr 21 10:30:17.953485 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:30:17.953487 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:30:17.953609 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:30:17.955331 systemd[1]: Reached target network.target - Network. Apr 21 10:30:17.956316 systemd-networkd[788]: eth0: Link UP Apr 21 10:30:17.956318 systemd-networkd[788]: eth0: Gained carrier Apr 21 10:30:17.956323 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:30:17.995615 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:30:18.013448 ignition[695]: parsing config with SHA512: fab1390c1174654845853b03991bb11b23709df16f3a5ccfdd45f885c540887527d404c4cc61c11d536de9d30db226ae9e43f07abc34f2232a0811959f4b1707 Apr 21 10:30:18.016924 unknown[695]: fetched base config from "system" Apr 21 10:30:18.016930 unknown[695]: fetched user config from "qemu" Apr 21 10:30:18.017222 ignition[695]: fetch-offline: fetch-offline passed Apr 21 10:30:18.019333 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:30:18.017264 ignition[695]: Ignition finished successfully Apr 21 10:30:18.021792 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:30:18.033690 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 21 10:30:18.046330 ignition[792]: Ignition 2.19.0 Apr 21 10:30:18.046342 ignition[792]: Stage: kargs Apr 21 10:30:18.046464 ignition[792]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:30:18.046471 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:30:18.047123 ignition[792]: kargs: kargs passed Apr 21 10:30:18.047150 ignition[792]: Ignition finished successfully Apr 21 10:30:18.050849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:30:18.063753 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 10:30:18.072985 ignition[800]: Ignition 2.19.0 Apr 21 10:30:18.072999 ignition[800]: Stage: disks Apr 21 10:30:18.073116 ignition[800]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:30:18.073122 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:30:18.073730 ignition[800]: disks: disks passed Apr 21 10:30:18.076008 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:30:18.073757 ignition[800]: Ignition finished successfully Apr 21 10:30:18.078884 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:30:18.080893 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:30:18.083722 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:30:18.085082 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:30:18.086456 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:30:18.105744 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:30:18.116301 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:30:18.120472 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:30:18.143631 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 21 10:30:18.215588 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:30:18.216038 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:30:18.217272 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:30:18.231865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:30:18.234961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:30:18.236049 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:30:18.242687 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Apr 21 10:30:18.236075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:30:18.250099 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:30:18.250115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:30:18.250124 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:30:18.250137 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:30:18.236090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:30:18.251494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:30:18.258936 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:30:18.261379 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 21 10:30:18.292465 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:30:18.296736 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:30:18.300134 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:30:18.303951 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:30:18.364754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:30:18.379686 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:30:18.380799 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:30:18.390588 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:30:18.400949 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:30:18.411979 ignition[933]: INFO : Ignition 2.19.0 Apr 21 10:30:18.411979 ignition[933]: INFO : Stage: mount Apr 21 10:30:18.413966 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:30:18.413966 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:30:18.413966 ignition[933]: INFO : mount: mount passed Apr 21 10:30:18.413966 ignition[933]: INFO : Ignition finished successfully Apr 21 10:30:18.415471 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:30:18.428642 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:30:18.813078 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:30:18.821777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 21 10:30:18.831407 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946)
Apr 21 10:30:18.831432 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:30:18.831443 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:30:18.833565 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:30:18.836585 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:30:18.837118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:30:18.854515 ignition[963]: INFO : Ignition 2.19.0
Apr 21 10:30:18.854515 ignition[963]: INFO : Stage: files
Apr 21 10:30:18.857053 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:30:18.857053 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:30:18.857053 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:30:18.857053 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:30:18.857053 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:30:18.865494 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:30:18.858181 unknown[963]: wrote ssh authorized keys file for user: core
Apr 21 10:30:18.912068 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:30:19.019662 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:30:19.019662 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:30:19.024780 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:30:19.298935 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:30:19.379690 systemd-networkd[788]: eth0: Gained IPv6LL
Apr 21 10:30:19.517365 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:30:19.517365 ignition[963]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 21 10:30:19.522221 ignition[963]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:30:19.539392 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:30:19.555861 ignition[963]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:30:19.555861 ignition[963]: INFO : files: files passed
Apr 21 10:30:19.555861 ignition[963]: INFO : Ignition finished successfully
Apr 21 10:30:19.550747 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:30:19.553525 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:30:19.555983 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:30:19.583466 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:30:19.556054 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:30:19.586866 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:30:19.586866 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:30:19.564363 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:30:19.594391 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:30:19.567188 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:30:19.570920 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:30:19.589071 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:30:19.589156 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:30:19.592984 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:30:19.595750 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:30:19.597209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:30:19.597764 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:30:19.609670 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:30:19.620684 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:30:19.629361 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:30:19.630045 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:30:19.633105 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:30:19.636014 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:30:19.636096 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:30:19.640539 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:30:19.641262 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:30:19.644978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:30:19.647226 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:30:19.649935 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:30:19.653278 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:30:19.655973 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:30:19.658272 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:30:19.661328 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:30:19.664051 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:30:19.666270 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:30:19.666348 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:30:19.670338 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:30:19.672969 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:30:19.675835 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:30:19.676144 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:30:19.678853 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:30:19.678933 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:30:19.684197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:30:19.684301 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:30:19.687017 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:30:19.689330 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:30:19.695717 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:30:19.696372 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:30:19.700288 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:30:19.702926 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:30:19.703012 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:30:19.705029 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:30:19.705092 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:30:19.707272 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:30:19.707358 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:30:19.709888 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:30:19.709962 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:30:19.724772 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:30:19.726375 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:30:19.726483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:30:19.731640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:30:19.732598 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:30:19.732693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:30:19.735772 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:30:19.735872 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:30:19.746032 ignition[1017]: INFO : Ignition 2.19.0
Apr 21 10:30:19.746032 ignition[1017]: INFO : Stage: umount
Apr 21 10:30:19.746032 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:30:19.746032 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:30:19.740520 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:30:19.755673 ignition[1017]: INFO : umount: umount passed
Apr 21 10:30:19.755673 ignition[1017]: INFO : Ignition finished successfully
Apr 21 10:30:19.740642 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:30:19.747731 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:30:19.747804 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:30:19.752626 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:30:19.753276 systemd[1]: Stopped target network.target - Network.
Apr 21 10:30:19.753959 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:30:19.754001 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:30:19.756197 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:30:19.756229 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:30:19.762267 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:30:19.764793 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:30:19.765590 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:30:19.765626 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:30:19.769110 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:30:19.771386 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:30:19.782149 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:30:19.782269 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:30:19.785713 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:30:19.785748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:30:19.798626 systemd-networkd[788]: eth0: DHCPv6 lease lost
Apr 21 10:30:19.800369 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:30:19.800487 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:30:19.803276 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:30:19.803347 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:30:19.806044 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:30:19.806079 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:30:19.807178 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:30:19.807209 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:30:19.823695 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:30:19.826525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:30:19.828530 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:30:19.829904 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:30:19.829941 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:30:19.833031 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:30:19.833064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:30:19.835521 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:30:19.844003 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:30:19.845331 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:30:19.859134 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:30:19.859281 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:30:19.862531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:30:19.862667 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:30:19.865458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:30:19.865485 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:30:19.868186 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:30:19.868222 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:30:19.871241 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:30:19.871272 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:30:19.873647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:30:19.873676 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:30:19.887727 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:30:19.888272 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:30:19.888315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:30:19.891131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:30:19.891161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:30:19.894190 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:30:19.894263 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:30:19.899269 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:30:19.902414 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:30:19.911133 systemd[1]: Switching root.
Apr 21 10:30:19.943390 systemd-journald[194]: Journal stopped
Apr 21 10:30:20.580684 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:30:20.580735 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:30:20.580747 kernel: SELinux: policy capability open_perms=1
Apr 21 10:30:20.580758 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:30:20.580765 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:30:20.580773 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:30:20.580786 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:30:20.580796 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:30:20.580803 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:30:20.580811 kernel: audit: type=1403 audit(1776767420.076:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:30:20.580837 systemd[1]: Successfully loaded SELinux policy in 28.338ms.
Apr 21 10:30:20.580854 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.377ms.
Apr 21 10:30:20.580863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:30:20.580871 systemd[1]: Detected virtualization kvm.
Apr 21 10:30:20.580881 systemd[1]: Detected architecture x86-64.
Apr 21 10:30:20.580889 systemd[1]: Detected first boot.
Apr 21 10:30:20.580898 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:30:20.580906 zram_generator::config[1079]: No configuration found.
Apr 21 10:30:20.580917 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:30:20.580925 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:30:20.580932 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 10:30:20.580941 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:30:20.580949 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:30:20.580956 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:30:20.580967 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:30:20.580975 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:30:20.580984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:30:20.580992 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:30:20.580999 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:30:20.581007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:30:20.581015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:30:20.581022 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:30:20.581030 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:30:20.581039 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:30:20.581047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:30:20.581056 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:30:20.581064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:30:20.581072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:30:20.581079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:30:20.581087 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:30:20.581095 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:30:20.581104 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:30:20.581111 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:30:20.581119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:30:20.581127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:30:20.581135 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:30:20.581142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:30:20.581150 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:30:20.581158 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:30:20.581166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:30:20.581173 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:30:20.581183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:30:20.581190 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:30:20.581198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:30:20.581206 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:30:20.581214 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:30:20.581222 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:30:20.581229 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:30:20.581237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:30:20.581247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:30:20.581254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:30:20.581265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:30:20.581272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:30:20.581280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:30:20.581287 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:30:20.581295 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:30:20.581302 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:30:20.581312 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 10:30:20.581320 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 10:30:20.581327 kernel: fuse: init (API version 7.39)
Apr 21 10:30:20.581334 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:30:20.581342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:30:20.581350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:30:20.581357 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:30:20.581364 kernel: ACPI: bus type drm_connector registered
Apr 21 10:30:20.581372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:30:20.581392 systemd-journald[1167]: Collecting audit messages is disabled.
Apr 21 10:30:20.581410 systemd-journald[1167]: Journal started
Apr 21 10:30:20.581427 systemd-journald[1167]: Runtime Journal (/run/log/journal/87e643309bf14a9eb2a5b5702d7dc1a3) is 6.0M, max 48.3M, 42.2M free.
Apr 21 10:30:20.585536 kernel: loop: module loaded
Apr 21 10:30:20.585599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:30:20.589476 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:30:20.590419 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:30:20.591904 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:30:20.593420 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:30:20.594793 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:30:20.596301 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:30:20.597846 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:30:20.599310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:30:20.601055 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:30:20.602875 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:30:20.602989 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:30:20.604714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:30:20.604840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:30:20.606492 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:30:20.606630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:30:20.608197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:30:20.608308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:30:20.610067 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:30:20.610177 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:30:20.611782 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:30:20.611922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:30:20.613618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:30:20.615358 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:30:20.617271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:30:20.625328 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:30:20.627392 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:30:20.634637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:30:20.636877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:30:20.638344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:30:20.639231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:30:20.641305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:30:20.642864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:30:20.643667 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:30:20.645230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:30:20.647480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:30:20.651485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:30:20.658335 systemd-journald[1167]: Time spent on flushing to /var/log/journal/87e643309bf14a9eb2a5b5702d7dc1a3 is 16.950ms for 986 entries.
Apr 21 10:30:20.658335 systemd-journald[1167]: System Journal (/var/log/journal/87e643309bf14a9eb2a5b5702d7dc1a3) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:30:20.685336 systemd-journald[1167]: Received client request to flush runtime journal.
Apr 21 10:30:20.664748 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:30:20.667413 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:30:20.669875 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:30:20.671709 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:30:20.673686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:30:20.679373 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:30:20.680297 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:30:20.686053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:30:20.689440 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Apr 21 10:30:20.689459 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Apr 21 10:30:20.692796 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:30:20.701725 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:30:20.717654 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:30:20.720316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:30:20.733041 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Apr 21 10:30:20.733061 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Apr 21 10:30:20.735866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:30:20.967861 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:30:20.980806 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:30:20.997390 systemd-udevd[1245]: Using default interface naming scheme 'v255'.
Apr 21 10:30:21.010988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:30:21.018790 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:30:21.027686 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:30:21.033438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1258)
Apr 21 10:30:21.046181 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 10:30:21.061972 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:30:21.063794 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:30:21.074585 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 10:30:21.080592 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:30:21.097624 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 21 10:30:21.105602 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 21 10:30:21.105814 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:30:21.109804 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:30:21.109981 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:30:21.111998 systemd-networkd[1253]: lo: Link UP Apr 21 10:30:21.112001 systemd-networkd[1253]: lo: Gained carrier Apr 21 10:30:21.112793 systemd-networkd[1253]: Enumeration completed Apr 21 10:30:21.113277 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:30:21.113282 systemd-networkd[1253]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:30:21.114134 systemd-networkd[1253]: eth0: Link UP Apr 21 10:30:21.114166 systemd-networkd[1253]: eth0: Gained carrier Apr 21 10:30:21.114194 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:30:21.118006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:30:21.118677 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:30:21.122798 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:30:21.126606 systemd-networkd[1253]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:30:21.132702 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 21 10:30:21.140193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:30:21.140363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:30:21.200875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:30:21.240205 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:30:21.252698 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:30:21.254897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:30:21.261786 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:30:21.292899 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:30:21.294888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:30:21.305661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:30:21.309985 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:30:21.331419 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 21 10:30:21.333898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:30:21.335647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 10:30:21.335673 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:30:21.337095 systemd[1]: Reached target machines.target - Containers. Apr 21 10:30:21.339103 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 21 10:30:21.353674 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 21 10:30:21.356457 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 21 10:30:21.358039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:30:21.358699 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 10:30:21.360517 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 21 10:30:21.363135 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 10:30:21.365268 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 10:30:21.369976 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:30:21.378571 kernel: loop0: detected capacity change from 0 to 140768 Apr 21 10:30:21.381266 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 10:30:21.381717 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 21 10:30:21.394581 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 10:30:21.413585 kernel: loop1: detected capacity change from 0 to 228704 Apr 21 10:30:21.435588 kernel: loop2: detected capacity change from 0 to 142488 Apr 21 10:30:21.464844 kernel: loop3: detected capacity change from 0 to 140768 Apr 21 10:30:21.476574 kernel: loop4: detected capacity change from 0 to 228704 Apr 21 10:30:21.482572 kernel: loop5: detected capacity change from 0 to 142488 Apr 21 10:30:21.489669 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 21 10:30:21.489991 (sd-merge)[1319]: Merged extensions into '/usr'. Apr 21 10:30:21.492815 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:30:21.492843 systemd[1]: Reloading... 
Apr 21 10:30:21.521770 zram_generator::config[1345]: No configuration found. Apr 21 10:30:21.545285 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:30:21.605922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:30:21.642069 systemd[1]: Reloading finished in 148 ms. Apr 21 10:30:21.656501 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 10:30:21.658434 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 10:30:21.669304 systemd[1]: Starting ensure-sysext.service... Apr 21 10:30:21.671308 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:30:21.674225 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Apr 21 10:30:21.674247 systemd[1]: Reloading... Apr 21 10:30:21.685921 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 10:30:21.686124 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:30:21.686627 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 10:30:21.686793 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Apr 21 10:30:21.686862 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Apr 21 10:30:21.688521 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:30:21.688536 systemd-tmpfiles[1392]: Skipping /boot Apr 21 10:30:21.693379 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 21 10:30:21.693852 systemd-tmpfiles[1392]: Skipping /boot Apr 21 10:30:21.707590 zram_generator::config[1417]: No configuration found. Apr 21 10:30:21.783952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:30:21.820546 systemd[1]: Reloading finished in 146 ms. Apr 21 10:30:21.838617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:30:21.855203 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:30:21.858629 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 10:30:21.861670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:30:21.865867 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:30:21.869864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:30:21.875311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:30:21.875584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:30:21.876659 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:30:21.880794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:30:21.885106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:30:21.886885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 21 10:30:21.886966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:30:21.887437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:30:21.887537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:30:21.891934 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 10:30:21.894219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:30:21.894313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:30:21.896612 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:30:21.896781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:30:21.902944 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:30:21.906774 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:30:21.906933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:30:21.911538 augenrules[1501]: No rules Apr 21 10:30:21.913752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:30:21.916864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:30:21.919877 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:30:21.921953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:30:21.923717 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 21 10:30:21.927792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:30:21.928775 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:30:21.929749 systemd-resolved[1475]: Positive Trust Anchors: Apr 21 10:30:21.929975 systemd-resolved[1475]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:30:21.930018 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:30:21.931055 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:30:21.934255 systemd-resolved[1475]: Defaulting to hostname 'linux'. Apr 21 10:30:21.953913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:30:21.954053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:30:21.956305 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:30:21.958124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:30:21.958241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:30:21.960131 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:30:21.960242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 21 10:30:21.962153 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:30:21.971755 systemd[1]: Reached target network.target - Network. Apr 21 10:30:21.973225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:30:21.975109 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:30:21.975271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:30:21.988800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:30:21.991456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:30:21.993573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:30:21.995800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:30:21.996518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:30:21.996720 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:30:21.996817 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:30:21.997497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:30:21.997773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:30:21.999738 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:30:21.999869 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:30:22.001753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 21 10:30:22.001898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:30:22.018998 systemd[1]: Finished ensure-sysext.service. Apr 21 10:30:22.020738 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:30:22.020896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:30:22.024766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:30:22.024857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:30:22.026111 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 10:30:22.064517 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 10:30:22.066800 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:30:22.850575 systemd-timesyncd[1540]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 10:30:22.850614 systemd-timesyncd[1540]: Initial clock synchronization to Tue 2026-04-21 10:30:22.850402 UTC. Apr 21 10:30:22.851558 systemd-resolved[1475]: Clock change detected. Flushing caches. Apr 21 10:30:22.851629 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 10:30:22.853331 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:30:22.854972 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:30:22.856663 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:30:22.856692 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:30:22.857866 systemd[1]: Reached target time-set.target - System Time Set. 
Apr 21 10:30:22.859304 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:30:22.860768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:30:22.862419 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:30:22.864108 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:30:22.866761 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:30:22.868833 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:30:22.877414 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:30:22.879104 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:30:22.880594 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:30:22.881933 systemd[1]: System is tainted: cgroupsv1 Apr 21 10:30:22.881972 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:30:22.881985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:30:22.882776 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:30:22.884851 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:30:22.886674 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 10:30:22.889865 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:30:22.891338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:30:22.892420 jq[1546]: false Apr 21 10:30:22.892631 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:30:22.896328 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 21 10:30:22.899868 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 10:30:22.901987 extend-filesystems[1548]: Found loop3 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found loop4 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found loop5 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found sr0 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda1 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda2 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda3 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found usr Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda4 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda6 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda7 Apr 21 10:30:22.901987 extend-filesystems[1548]: Found vda9 Apr 21 10:30:22.901987 extend-filesystems[1548]: Checking size of /dev/vda9 Apr 21 10:30:22.968401 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 10:30:22.968445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1259) Apr 21 10:30:22.968456 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 10:30:22.904084 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 10:30:22.911437 dbus-daemon[1545]: [system] SELinux support is enabled Apr 21 10:30:22.968774 extend-filesystems[1548]: Resized partition /dev/vda9 Apr 21 10:30:22.913864 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 21 10:30:22.972721 extend-filesystems[1568]: resize2fs 1.47.1 (20-May-2024) Apr 21 10:30:22.972721 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 10:30:22.972721 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 10:30:22.972721 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 10:30:22.915639 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 10:30:22.980335 extend-filesystems[1548]: Resized filesystem in /dev/vda9 Apr 21 10:30:22.920547 systemd[1]: Starting update-engine.service - Update Engine... Apr 21 10:30:22.925515 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 10:30:22.983624 update_engine[1571]: I20260421 10:30:22.956712 1571 main.cc:92] Flatcar Update Engine starting Apr 21 10:30:22.983624 update_engine[1571]: I20260421 10:30:22.963979 1571 update_check_scheduler.cc:74] Next update check in 7m44s Apr 21 10:30:22.927684 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 21 10:30:22.986373 jq[1573]: true Apr 21 10:30:22.933970 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 10:30:22.934171 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 10:30:22.934352 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 10:30:22.986624 jq[1578]: true Apr 21 10:30:22.934488 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 21 10:30:22.938184 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 10:30:22.938335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 21 10:30:22.959305 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 10:30:22.964189 systemd[1]: Started update-engine.service - Update Engine. Apr 21 10:30:22.990073 tar[1577]: linux-amd64/LICENSE Apr 21 10:30:22.990073 tar[1577]: linux-amd64/helm Apr 21 10:30:22.966331 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 10:30:22.966348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 10:30:22.968402 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 10:30:22.968414 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 10:30:22.970822 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 10:30:22.981709 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 10:30:22.983922 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:30:22.984100 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:30:22.991101 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button) Apr 21 10:30:22.991159 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 10:30:22.991386 systemd-logind[1562]: New seat seat0. Apr 21 10:30:22.992221 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 21 10:30:22.995718 bash[1602]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:30:22.996507 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 10:30:22.999819 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 21 10:30:23.016463 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 10:30:23.041853 systemd-networkd[1253]: eth0: Gained IPv6LL Apr 21 10:30:23.045974 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:30:23.048127 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:30:23.056971 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 10:30:23.061935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:30:23.064208 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:30:23.084968 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 10:30:23.085153 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 10:30:23.087086 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:30:23.091069 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:30:23.140572 containerd[1584]: time="2026-04-21T10:30:23.140484957Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 10:30:23.159402 containerd[1584]: time="2026-04-21T10:30:23.159340074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:30:23.160772 containerd[1584]: time="2026-04-21T10:30:23.160730910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.160832639Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.160847440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.160944655Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.160959992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.160996301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161005152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161230172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161247547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161256805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161263350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161312617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161727 containerd[1584]: time="2026-04-21T10:30:23.161453849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161985 containerd[1584]: time="2026-04-21T10:30:23.161542895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:30:23.161985 containerd[1584]: time="2026-04-21T10:30:23.161551479Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 21 10:30:23.161985 containerd[1584]: time="2026-04-21T10:30:23.161598830Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 21 10:30:23.161985 containerd[1584]: time="2026-04-21T10:30:23.161627244Z" level=info msg="metadata content store policy set" policy=shared Apr 21 10:30:23.166227 containerd[1584]: time="2026-04-21T10:30:23.166211418Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:30:23.166292 containerd[1584]: time="2026-04-21T10:30:23.166283890Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 21 10:30:23.166365 containerd[1584]: time="2026-04-21T10:30:23.166358477Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:30:23.166400 containerd[1584]: time="2026-04-21T10:30:23.166393930Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 10:30:23.166429 containerd[1584]: time="2026-04-21T10:30:23.166424138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:30:23.166597 containerd[1584]: time="2026-04-21T10:30:23.166586359Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:30:23.166871 containerd[1584]: time="2026-04-21T10:30:23.166861363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:30:23.166988 containerd[1584]: time="2026-04-21T10:30:23.166978337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:30:23.167062 containerd[1584]: time="2026-04-21T10:30:23.167052149Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 10:30:23.167096 containerd[1584]: time="2026-04-21T10:30:23.167089778Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 10:30:23.167125 containerd[1584]: time="2026-04-21T10:30:23.167119525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167164 containerd[1584]: time="2026-04-21T10:30:23.167157675Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 21 10:30:23.167191 containerd[1584]: time="2026-04-21T10:30:23.167186099Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167219 containerd[1584]: time="2026-04-21T10:30:23.167213316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167247 containerd[1584]: time="2026-04-21T10:30:23.167241248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167273 containerd[1584]: time="2026-04-21T10:30:23.167268145Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167300 containerd[1584]: time="2026-04-21T10:30:23.167294299Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167327 containerd[1584]: time="2026-04-21T10:30:23.167321527Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:30:23.167365 containerd[1584]: time="2026-04-21T10:30:23.167358245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167426 containerd[1584]: time="2026-04-21T10:30:23.167419243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167463 containerd[1584]: time="2026-04-21T10:30:23.167456255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167515 containerd[1584]: time="2026-04-21T10:30:23.167507431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 21 10:30:23.167549 containerd[1584]: time="2026-04-21T10:30:23.167542848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167581 containerd[1584]: time="2026-04-21T10:30:23.167573091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167607 containerd[1584]: time="2026-04-21T10:30:23.167602118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167634 containerd[1584]: time="2026-04-21T10:30:23.167628711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167664 containerd[1584]: time="2026-04-21T10:30:23.167658676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167695 containerd[1584]: time="2026-04-21T10:30:23.167689937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167729 containerd[1584]: time="2026-04-21T10:30:23.167722788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167792 containerd[1584]: time="2026-04-21T10:30:23.167785213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167827 containerd[1584]: time="2026-04-21T10:30:23.167820666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167857 containerd[1584]: time="2026-04-21T10:30:23.167851408Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:30:23.167894 containerd[1584]: time="2026-04-21T10:30:23.167888166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 21 10:30:23.167921 containerd[1584]: time="2026-04-21T10:30:23.167915949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.167951 containerd[1584]: time="2026-04-21T10:30:23.167945779Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:30:23.168040 containerd[1584]: time="2026-04-21T10:30:23.168028083Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:30:23.168185 containerd[1584]: time="2026-04-21T10:30:23.168174086Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:30:23.168217 containerd[1584]: time="2026-04-21T10:30:23.168211900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:30:23.168315 containerd[1584]: time="2026-04-21T10:30:23.168247368Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:30:23.168315 containerd[1584]: time="2026-04-21T10:30:23.168257199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:30:23.169353 containerd[1584]: time="2026-04-21T10:30:23.168350543Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:30:23.169353 containerd[1584]: time="2026-04-21T10:30:23.168361396Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:30:23.169353 containerd[1584]: time="2026-04-21T10:30:23.168370149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 21 10:30:23.169437 containerd[1584]: time="2026-04-21T10:30:23.168606570Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:30:23.169437 containerd[1584]: time="2026-04-21T10:30:23.168646413Z" level=info msg="Connect containerd service" Apr 21 10:30:23.169437 containerd[1584]: time="2026-04-21T10:30:23.168672315Z" level=info msg="using legacy CRI server" Apr 21 10:30:23.169437 containerd[1584]: time="2026-04-21T10:30:23.168677256Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:30:23.169437 containerd[1584]: time="2026-04-21T10:30:23.168769213Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:30:23.169437 containerd[1584]: time="2026-04-21T10:30:23.169182455Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:30:23.169678 containerd[1584]: time="2026-04-21T10:30:23.169655757Z" level=info msg="Start subscribing containerd event" Apr 21 10:30:23.169722 containerd[1584]: time="2026-04-21T10:30:23.169715711Z" level=info msg="Start recovering state" Apr 21 10:30:23.169838 containerd[1584]: time="2026-04-21T10:30:23.169830007Z" level=info msg="Start event monitor" Apr 21 10:30:23.169872 containerd[1584]: time="2026-04-21T10:30:23.169867222Z" 
level=info msg="Start snapshots syncer" Apr 21 10:30:23.169897 containerd[1584]: time="2026-04-21T10:30:23.169891963Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:30:23.169920 containerd[1584]: time="2026-04-21T10:30:23.169915376Z" level=info msg="Start streaming server" Apr 21 10:30:23.170257 containerd[1584]: time="2026-04-21T10:30:23.170245924Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:30:23.170339 containerd[1584]: time="2026-04-21T10:30:23.170330704Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:30:23.171793 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:30:23.173602 containerd[1584]: time="2026-04-21T10:30:23.173587306Z" level=info msg="containerd successfully booted in 0.033758s" Apr 21 10:30:23.278009 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:30:23.296680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:30:23.308039 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:30:23.313391 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:30:23.313552 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:30:23.325962 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:30:23.333285 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:30:23.336280 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:30:23.341817 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:30:23.343821 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:30:23.396664 tar[1577]: linux-amd64/README.md Apr 21 10:30:23.408959 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:30:23.675446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:30:23.677368 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:30:23.679801 systemd[1]: Startup finished in 5.209s (kernel) + 2.847s (userspace) = 8.057s. Apr 21 10:30:23.681428 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:30:24.071237 kubelet[1682]: E0421 10:30:24.071135 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:30:24.073132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:30:24.073289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:30:29.518290 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:30:29.533999 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). Apr 21 10:30:29.567954 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:29.569520 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:29.576490 systemd-logind[1562]: New session 1 of user core. Apr 21 10:30:29.577152 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:30:29.586994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:30:29.596459 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:30:29.598280 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 21 10:30:29.603631 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:30:29.667393 systemd[1701]: Queued start job for default target default.target. Apr 21 10:30:29.667673 systemd[1701]: Created slice app.slice - User Application Slice. Apr 21 10:30:29.667687 systemd[1701]: Reached target paths.target - Paths. Apr 21 10:30:29.667696 systemd[1701]: Reached target timers.target - Timers. Apr 21 10:30:29.675851 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:30:29.680828 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:30:29.680884 systemd[1701]: Reached target sockets.target - Sockets. Apr 21 10:30:29.680893 systemd[1701]: Reached target basic.target - Basic System. Apr 21 10:30:29.680918 systemd[1701]: Reached target default.target - Main User Target. Apr 21 10:30:29.680935 systemd[1701]: Startup finished in 72ms. Apr 21 10:30:29.681279 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:30:29.682398 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:30:29.735099 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:35764.service - OpenSSH per-connection server daemon (10.0.0.1:35764). Apr 21 10:30:29.767000 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 35764 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:29.768084 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:29.771315 systemd-logind[1562]: New session 2 of user core. Apr 21 10:30:29.779967 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:30:29.830405 sshd[1713]: pam_unix(sshd:session): session closed for user core Apr 21 10:30:29.837952 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:35770.service - OpenSSH per-connection server daemon (10.0.0.1:35770). 
Apr 21 10:30:29.838353 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:35764.service: Deactivated successfully. Apr 21 10:30:29.839562 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:30:29.840468 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:30:29.841310 systemd-logind[1562]: Removed session 2. Apr 21 10:30:29.867422 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 35770 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:29.868406 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:29.871485 systemd-logind[1562]: New session 3 of user core. Apr 21 10:30:29.887954 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:30:29.934950 sshd[1719]: pam_unix(sshd:session): session closed for user core Apr 21 10:30:29.946963 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:35780.service - OpenSSH per-connection server daemon (10.0.0.1:35780). Apr 21 10:30:29.947304 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:35770.service: Deactivated successfully. Apr 21 10:30:29.949033 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:30:29.949444 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:30:29.950119 systemd-logind[1562]: Removed session 3. Apr 21 10:30:29.977555 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 35780 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:29.978497 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:29.981788 systemd-logind[1562]: New session 4 of user core. Apr 21 10:30:29.987948 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:30:30.038155 sshd[1726]: pam_unix(sshd:session): session closed for user core Apr 21 10:30:30.049954 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:35784.service - OpenSSH per-connection server daemon (10.0.0.1:35784). 
Apr 21 10:30:30.050405 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:35780.service: Deactivated successfully. Apr 21 10:30:30.051648 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:30:30.052258 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:30:30.053236 systemd-logind[1562]: Removed session 4. Apr 21 10:30:30.079668 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 35784 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:30.080923 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:30.084190 systemd-logind[1562]: New session 5 of user core. Apr 21 10:30:30.093955 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:30:30.147392 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:30:30.147601 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:30:30.160672 sudo[1741]: pam_unix(sudo:session): session closed for user root Apr 21 10:30:30.162364 sshd[1735]: pam_unix(sshd:session): session closed for user core Apr 21 10:30:30.172984 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:35798.service - OpenSSH per-connection server daemon (10.0.0.1:35798). Apr 21 10:30:30.173398 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:35784.service: Deactivated successfully. Apr 21 10:30:30.174598 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:30:30.175104 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:30:30.176092 systemd-logind[1562]: Removed session 5. Apr 21 10:30:30.203219 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 35798 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:30.204217 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:30.207284 systemd-logind[1562]: New session 6 of user core. 
Apr 21 10:30:30.216945 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 10:30:30.267087 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:30:30.267290 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:30:30.270062 sudo[1751]: pam_unix(sudo:session): session closed for user root Apr 21 10:30:30.274123 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:30:30.274341 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:30:30.292991 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:30:30.294336 auditctl[1754]: No rules Apr 21 10:30:30.294580 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:30:30.294820 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:30:30.296548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:30:30.318026 augenrules[1773]: No rules Apr 21 10:30:30.319089 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:30:30.319886 sudo[1750]: pam_unix(sudo:session): session closed for user root Apr 21 10:30:30.321129 sshd[1743]: pam_unix(sshd:session): session closed for user core Apr 21 10:30:30.327965 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:35800.service - OpenSSH per-connection server daemon (10.0.0.1:35800). Apr 21 10:30:30.328363 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:35798.service: Deactivated successfully. Apr 21 10:30:30.329605 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:30:30.330163 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:30:30.331093 systemd-logind[1562]: Removed session 6. 
Apr 21 10:30:30.357667 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 35800 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:30:30.358624 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:30:30.361792 systemd-logind[1562]: New session 7 of user core. Apr 21 10:30:30.371932 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:30:30.422142 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:30:30.422339 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:30:30.639966 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:30:30.640187 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:30:30.845927 dockerd[1804]: time="2026-04-21T10:30:30.845844219Z" level=info msg="Starting up" Apr 21 10:30:31.046573 dockerd[1804]: time="2026-04-21T10:30:31.046444679Z" level=info msg="Loading containers: start." Apr 21 10:30:31.139784 kernel: Initializing XFRM netlink socket Apr 21 10:30:31.214030 systemd-networkd[1253]: docker0: Link UP Apr 21 10:30:31.235239 dockerd[1804]: time="2026-04-21T10:30:31.235175280Z" level=info msg="Loading containers: done." 
Apr 21 10:30:31.248304 dockerd[1804]: time="2026-04-21T10:30:31.248253178Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:30:31.248410 dockerd[1804]: time="2026-04-21T10:30:31.248357993Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:30:31.248435 dockerd[1804]: time="2026-04-21T10:30:31.248428120Z" level=info msg="Daemon has completed initialization" Apr 21 10:30:31.276361 dockerd[1804]: time="2026-04-21T10:30:31.276281982Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:30:31.276454 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:30:31.645626 containerd[1584]: time="2026-04-21T10:30:31.645579927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 10:30:32.172606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83489185.mount: Deactivated successfully. 
Apr 21 10:30:32.775629 containerd[1584]: time="2026-04-21T10:30:32.775568093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:32.776095 containerd[1584]: time="2026-04-21T10:30:32.776035801Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 21 10:30:32.776995 containerd[1584]: time="2026-04-21T10:30:32.776962407Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:32.779286 containerd[1584]: time="2026-04-21T10:30:32.779249187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:32.780061 containerd[1584]: time="2026-04-21T10:30:32.780014518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.134395595s" Apr 21 10:30:32.780092 containerd[1584]: time="2026-04-21T10:30:32.780072104Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 21 10:30:32.780860 containerd[1584]: time="2026-04-21T10:30:32.780826028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 10:30:33.643171 containerd[1584]: time="2026-04-21T10:30:33.643107799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:33.644126 containerd[1584]: time="2026-04-21T10:30:33.644074622Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 21 10:30:33.644992 containerd[1584]: time="2026-04-21T10:30:33.644951907Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:33.646914 containerd[1584]: time="2026-04-21T10:30:33.646890131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:33.647773 containerd[1584]: time="2026-04-21T10:30:33.647728982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 866.871161ms" Apr 21 10:30:33.647811 containerd[1584]: time="2026-04-21T10:30:33.647773622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 21 10:30:33.648298 containerd[1584]: time="2026-04-21T10:30:33.648219837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 10:30:34.178282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:30:34.185901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:30:34.276889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:30:34.280218 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:30:34.316927 kubelet[2031]: E0421 10:30:34.316820 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:30:34.320052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:30:34.320193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:30:34.465439 containerd[1584]: time="2026-04-21T10:30:34.465272839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:34.466019 containerd[1584]: time="2026-04-21T10:30:34.465977237Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 21 10:30:34.467022 containerd[1584]: time="2026-04-21T10:30:34.466986670Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:34.469076 containerd[1584]: time="2026-04-21T10:30:34.469039327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:34.469848 containerd[1584]: time="2026-04-21T10:30:34.469820196Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 821.580345ms" Apr 21 10:30:34.469873 containerd[1584]: time="2026-04-21T10:30:34.469849820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 21 10:30:34.470317 containerd[1584]: time="2026-04-21T10:30:34.470294084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 10:30:35.290046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145611048.mount: Deactivated successfully. Apr 21 10:30:35.547362 containerd[1584]: time="2026-04-21T10:30:35.547200009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:35.547960 containerd[1584]: time="2026-04-21T10:30:35.547914106Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 21 10:30:35.548824 containerd[1584]: time="2026-04-21T10:30:35.548795254Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:35.550788 containerd[1584]: time="2026-04-21T10:30:35.550758055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:35.551266 containerd[1584]: time="2026-04-21T10:30:35.551229744Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.080905302s" Apr 21 10:30:35.551266 containerd[1584]: time="2026-04-21T10:30:35.551261683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 21 10:30:35.551791 containerd[1584]: time="2026-04-21T10:30:35.551715791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 10:30:36.023909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493553291.mount: Deactivated successfully. Apr 21 10:30:36.620014 containerd[1584]: time="2026-04-21T10:30:36.619944476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:36.620529 containerd[1584]: time="2026-04-21T10:30:36.620462514Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 21 10:30:36.621682 containerd[1584]: time="2026-04-21T10:30:36.621646974Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:36.624321 containerd[1584]: time="2026-04-21T10:30:36.624282115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:36.625339 containerd[1584]: time="2026-04-21T10:30:36.625301393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.073526342s" Apr 21 10:30:36.625339 containerd[1584]: time="2026-04-21T10:30:36.625339269Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 21 10:30:36.625865 containerd[1584]: time="2026-04-21T10:30:36.625842308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 10:30:37.054231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491230140.mount: Deactivated successfully. Apr 21 10:30:37.059250 containerd[1584]: time="2026-04-21T10:30:37.059183816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:37.059832 containerd[1584]: time="2026-04-21T10:30:37.059787041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 10:30:37.060794 containerd[1584]: time="2026-04-21T10:30:37.060768042Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:37.062627 containerd[1584]: time="2026-04-21T10:30:37.062582348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:37.063049 containerd[1584]: time="2026-04-21T10:30:37.063013119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 437.144088ms" Apr 21 
10:30:37.063049 containerd[1584]: time="2026-04-21T10:30:37.063045547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 21 10:30:37.063560 containerd[1584]: time="2026-04-21T10:30:37.063527736Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 10:30:37.506107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921588818.mount: Deactivated successfully. Apr 21 10:30:38.064060 containerd[1584]: time="2026-04-21T10:30:38.064003031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:38.064681 containerd[1584]: time="2026-04-21T10:30:38.064637829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 21 10:30:38.065539 containerd[1584]: time="2026-04-21T10:30:38.065502329Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:38.067837 containerd[1584]: time="2026-04-21T10:30:38.067809461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:38.068700 containerd[1584]: time="2026-04-21T10:30:38.068676281Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.005122817s" Apr 21 10:30:38.068757 containerd[1584]: time="2026-04-21T10:30:38.068706291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 21 10:30:39.773894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:30:39.781019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:30:39.802526 systemd[1]: Reloading requested from client PID 2199 ('systemctl') (unit session-7.scope)... Apr 21 10:30:39.802547 systemd[1]: Reloading... Apr 21 10:30:39.851829 zram_generator::config[2244]: No configuration found. Apr 21 10:30:39.927035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:30:39.970771 systemd[1]: Reloading finished in 167 ms. Apr 21 10:30:40.009931 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 21 10:30:40.009976 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 21 10:30:40.010181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:30:40.011813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:30:40.103273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:30:40.107001 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:30:40.156114 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:30:40.156114 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 21 10:30:40.156114 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:30:40.156114 kubelet[2299]: I0421 10:30:40.155950 2299 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:30:40.309535 kubelet[2299]: I0421 10:30:40.309469 2299 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:30:40.309535 kubelet[2299]: I0421 10:30:40.309507 2299 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:30:40.309830 kubelet[2299]: I0421 10:30:40.309801 2299 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:30:40.338767 kubelet[2299]: E0421 10:30:40.338683 2299 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:30:40.342453 kubelet[2299]: I0421 10:30:40.342380 2299 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:30:40.353551 kubelet[2299]: E0421 10:30:40.353430 2299 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:30:40.353551 kubelet[2299]: I0421 10:30:40.353461 2299 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 21 10:30:40.358009 kubelet[2299]: I0421 10:30:40.357978 2299 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:30:40.359332 kubelet[2299]: I0421 10:30:40.359137 2299 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:30:40.359874 kubelet[2299]: I0421 10:30:40.359262 2299 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 
10:30:40.360190 kubelet[2299]: I0421 10:30:40.359880 2299 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:30:40.360190 kubelet[2299]: I0421 10:30:40.359890 2299 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:30:40.360190 kubelet[2299]: I0421 10:30:40.360033 2299 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:30:40.364291 kubelet[2299]: I0421 10:30:40.364239 2299 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:30:40.364291 kubelet[2299]: I0421 10:30:40.364261 2299 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:30:40.364291 kubelet[2299]: I0421 10:30:40.364281 2299 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:30:40.366871 kubelet[2299]: I0421 10:30:40.366696 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:30:40.369850 kubelet[2299]: I0421 10:30:40.369819 2299 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:30:40.370346 kubelet[2299]: I0421 10:30:40.370329 2299 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:30:40.371892 kubelet[2299]: W0421 10:30:40.371858 2299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:30:40.378302 kubelet[2299]: E0421 10:30:40.378134 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:30:40.378302 kubelet[2299]: E0421 10:30:40.378184 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:30:40.379044 kubelet[2299]: I0421 10:30:40.379010 2299 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:30:40.379107 kubelet[2299]: I0421 10:30:40.379055 2299 server.go:1289] "Started kubelet" Apr 21 10:30:40.381060 kubelet[2299]: I0421 10:30:40.380990 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:30:40.381839 kubelet[2299]: I0421 10:30:40.381165 2299 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:30:40.381972 kubelet[2299]: I0421 10:30:40.381952 2299 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:30:40.382063 kubelet[2299]: I0421 10:30:40.382050 2299 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:30:40.382121 kubelet[2299]: I0421 10:30:40.382107 2299 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:30:40.383531 kubelet[2299]: I0421 10:30:40.382732 2299 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:30:40.383531 kubelet[2299]: I0421 10:30:40.382824 2299 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:30:40.383686 kubelet[2299]: E0421 10:30:40.383663 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:30:40.392763 kubelet[2299]: I0421 10:30:40.382115 2299 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:30:40.392763 kubelet[2299]: I0421 10:30:40.390040 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:30:40.392763 kubelet[2299]: I0421 10:30:40.390463 2299 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:30:40.392763 kubelet[2299]: I0421 10:30:40.390981 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:30:40.396146 kubelet[2299]: E0421 10:30:40.394547 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:30:40.396146 kubelet[2299]: E0421 10:30:40.394941 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" Apr 21 10:30:40.442141 kubelet[2299]: I0421 10:30:40.442042 2299 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:30:40.444014 kubelet[2299]: E0421 10:30:40.442378 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: 
connection refused" event="&Event{ObjectMeta:{localhost.18a85892c445a37c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:30:40.379028348 +0000 UTC m=+0.263002643,LastTimestamp:2026-04-21 10:30:40.379028348 +0000 UTC m=+0.263002643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:30:40.459247 kubelet[2299]: I0421 10:30:40.459180 2299 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:30:40.460288 kubelet[2299]: I0421 10:30:40.460270 2299 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:30:40.460351 kubelet[2299]: I0421 10:30:40.460295 2299 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:30:40.460351 kubelet[2299]: I0421 10:30:40.460327 2299 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:30:40.460351 kubelet[2299]: I0421 10:30:40.460334 2299 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:30:40.460406 kubelet[2299]: E0421 10:30:40.460392 2299 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:30:40.460406 kubelet[2299]: I0421 10:30:40.460399 2299 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:30:40.460445 kubelet[2299]: I0421 10:30:40.460408 2299 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:30:40.460445 kubelet[2299]: I0421 10:30:40.460420 2299 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:30:40.461568 kubelet[2299]: E0421 10:30:40.461537 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:30:40.495585 kubelet[2299]: E0421 10:30:40.495519 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:30:40.505458 kubelet[2299]: I0421 10:30:40.505404 2299 policy_none.go:49] "None policy: Start" Apr 21 10:30:40.505458 kubelet[2299]: I0421 10:30:40.505449 2299 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:30:40.505458 kubelet[2299]: I0421 10:30:40.505461 2299 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:30:40.509182 kubelet[2299]: E0421 10:30:40.509167 2299 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:30:40.509679 kubelet[2299]: I0421 10:30:40.509333 2299 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:30:40.509679 kubelet[2299]: I0421 
10:30:40.509343 2299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:30:40.509679 kubelet[2299]: I0421 10:30:40.509527 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:30:40.512340 kubelet[2299]: E0421 10:30:40.512308 2299 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:30:40.512340 kubelet[2299]: E0421 10:30:40.512334 2299 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:30:40.566959 kubelet[2299]: E0421 10:30:40.566915 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:40.569605 kubelet[2299]: E0421 10:30:40.569581 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:40.572529 kubelet[2299]: E0421 10:30:40.572495 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:40.591303 kubelet[2299]: I0421 10:30:40.591251 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ef11cd538f35225d482b8ed2c167f71-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ef11cd538f35225d482b8ed2c167f71\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:40.591428 kubelet[2299]: I0421 10:30:40.591344 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ef11cd538f35225d482b8ed2c167f71-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"9ef11cd538f35225d482b8ed2c167f71\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:40.591428 kubelet[2299]: I0421 10:30:40.591372 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ef11cd538f35225d482b8ed2c167f71-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ef11cd538f35225d482b8ed2c167f71\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:40.591428 kubelet[2299]: I0421 10:30:40.591392 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:40.591428 kubelet[2299]: I0421 10:30:40.591410 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:40.591525 kubelet[2299]: I0421 10:30:40.591437 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:40.591525 kubelet[2299]: I0421 10:30:40.591466 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:40.591525 kubelet[2299]: I0421 10:30:40.591510 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:40.591582 kubelet[2299]: I0421 10:30:40.591544 2299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:40.595809 kubelet[2299]: E0421 10:30:40.595732 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Apr 21 10:30:40.611137 kubelet[2299]: I0421 10:30:40.610995 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:30:40.611367 kubelet[2299]: E0421 10:30:40.611334 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Apr 21 10:30:40.813038 kubelet[2299]: I0421 10:30:40.812947 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:30:40.813394 kubelet[2299]: E0421 10:30:40.813334 2299 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: 
connection refused" node="localhost" Apr 21 10:30:40.867627 kubelet[2299]: E0421 10:30:40.867484 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:40.868395 containerd[1584]: time="2026-04-21T10:30:40.868333093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ef11cd538f35225d482b8ed2c167f71,Namespace:kube-system,Attempt:0,}" Apr 21 10:30:40.869870 kubelet[2299]: E0421 10:30:40.869851 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:40.870223 containerd[1584]: time="2026-04-21T10:30:40.870190072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 21 10:30:40.872813 kubelet[2299]: E0421 10:30:40.872793 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:40.873103 containerd[1584]: time="2026-04-21T10:30:40.873065016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 21 10:30:40.997218 kubelet[2299]: E0421 10:30:40.997139 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Apr 21 10:30:41.214931 kubelet[2299]: I0421 10:30:41.214893 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:30:41.215252 kubelet[2299]: E0421 10:30:41.215210 2299 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Apr 21 10:30:41.222495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257171151.mount: Deactivated successfully. Apr 21 10:30:41.228205 containerd[1584]: time="2026-04-21T10:30:41.228147998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:30:41.230484 containerd[1584]: time="2026-04-21T10:30:41.230446230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:30:41.231230 containerd[1584]: time="2026-04-21T10:30:41.231201229Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:30:41.231909 containerd[1584]: time="2026-04-21T10:30:41.231866090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:30:41.232594 containerd[1584]: time="2026-04-21T10:30:41.232555962Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:30:41.233144 containerd[1584]: time="2026-04-21T10:30:41.233097399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:30:41.233786 containerd[1584]: time="2026-04-21T10:30:41.233765781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:30:41.234916 
containerd[1584]: time="2026-04-21T10:30:41.234890272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:30:41.236302 containerd[1584]: time="2026-04-21T10:30:41.236261115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 367.841322ms" Apr 21 10:30:41.236752 containerd[1584]: time="2026-04-21T10:30:41.236719786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 366.470176ms" Apr 21 10:30:41.238600 containerd[1584]: time="2026-04-21T10:30:41.238562985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 365.431327ms" Apr 21 10:30:41.284308 kubelet[2299]: E0421 10:30:41.284252 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:30:41.534100 containerd[1584]: 
time="2026-04-21T10:30:41.533869524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:30:41.534100 containerd[1584]: time="2026-04-21T10:30:41.533963322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:30:41.534608 containerd[1584]: time="2026-04-21T10:30:41.533981407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:41.534608 containerd[1584]: time="2026-04-21T10:30:41.534418229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:41.537535 containerd[1584]: time="2026-04-21T10:30:41.537477228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:30:41.537604 containerd[1584]: time="2026-04-21T10:30:41.537568037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:30:41.537622 containerd[1584]: time="2026-04-21T10:30:41.537603568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:41.537707 containerd[1584]: time="2026-04-21T10:30:41.537684469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:41.547171 containerd[1584]: time="2026-04-21T10:30:41.547082282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:30:41.547171 containerd[1584]: time="2026-04-21T10:30:41.547136879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:30:41.547171 containerd[1584]: time="2026-04-21T10:30:41.547157832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:41.547327 containerd[1584]: time="2026-04-21T10:30:41.547229981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:41.660432 kernel: hrtimer: interrupt took 3246325 ns Apr 21 10:30:41.698577 containerd[1584]: time="2026-04-21T10:30:41.698501924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"54b93ebf784d53bd83125a50b6f04db6fff98980f4cd8bba9df2282f36951d3d\"" Apr 21 10:30:41.700396 kubelet[2299]: E0421 10:30:41.700377 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:41.702923 containerd[1584]: time="2026-04-21T10:30:41.702846805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ef11cd538f35225d482b8ed2c167f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"22d35657a3595067d4c3ddd98c1f2a792f6fd92a86a1ccbadd2689d3e14aaaa4\"" Apr 21 10:30:41.703877 kubelet[2299]: E0421 10:30:41.703859 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:41.707418 containerd[1584]: time="2026-04-21T10:30:41.707340964Z" level=info msg="CreateContainer within sandbox \"54b93ebf784d53bd83125a50b6f04db6fff98980f4cd8bba9df2282f36951d3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:30:41.708928 containerd[1584]: 
time="2026-04-21T10:30:41.708909915Z" level=info msg="CreateContainer within sandbox \"22d35657a3595067d4c3ddd98c1f2a792f6fd92a86a1ccbadd2689d3e14aaaa4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:30:41.710047 containerd[1584]: time="2026-04-21T10:30:41.710007536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"74ba338fdd2b91432de11e43f77649068d84dd00dc335dfed15389964af0a572\"" Apr 21 10:30:41.710685 kubelet[2299]: E0421 10:30:41.710673 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:41.714787 containerd[1584]: time="2026-04-21T10:30:41.714705158Z" level=info msg="CreateContainer within sandbox \"74ba338fdd2b91432de11e43f77649068d84dd00dc335dfed15389964af0a572\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:30:41.726487 containerd[1584]: time="2026-04-21T10:30:41.726417141Z" level=info msg="CreateContainer within sandbox \"22d35657a3595067d4c3ddd98c1f2a792f6fd92a86a1ccbadd2689d3e14aaaa4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9d3b5b8587db67151d6f200d21e85f04f0db948a0ad2e0cee8bdfbbe9f3dfea1\"" Apr 21 10:30:41.727113 containerd[1584]: time="2026-04-21T10:30:41.727046135Z" level=info msg="StartContainer for \"9d3b5b8587db67151d6f200d21e85f04f0db948a0ad2e0cee8bdfbbe9f3dfea1\"" Apr 21 10:30:41.727213 containerd[1584]: time="2026-04-21T10:30:41.727048553Z" level=info msg="CreateContainer within sandbox \"54b93ebf784d53bd83125a50b6f04db6fff98980f4cd8bba9df2282f36951d3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aba28103048c5e54e2077ff94c9b92211c6efb1a75c4060b8e33e73572b951d1\"" Apr 21 10:30:41.728015 containerd[1584]: time="2026-04-21T10:30:41.727976832Z" 
level=info msg="StartContainer for \"aba28103048c5e54e2077ff94c9b92211c6efb1a75c4060b8e33e73572b951d1\"" Apr 21 10:30:41.734987 containerd[1584]: time="2026-04-21T10:30:41.734932458Z" level=info msg="CreateContainer within sandbox \"74ba338fdd2b91432de11e43f77649068d84dd00dc335dfed15389964af0a572\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3e9a3e3a90ea9f28a50ba441687fc126e2044c5b889f31cf875c246af9faf3b\"" Apr 21 10:30:41.736064 containerd[1584]: time="2026-04-21T10:30:41.735846757Z" level=info msg="StartContainer for \"f3e9a3e3a90ea9f28a50ba441687fc126e2044c5b889f31cf875c246af9faf3b\"" Apr 21 10:30:41.798002 kubelet[2299]: E0421 10:30:41.797826 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Apr 21 10:30:41.823990 containerd[1584]: time="2026-04-21T10:30:41.823940586Z" level=info msg="StartContainer for \"9d3b5b8587db67151d6f200d21e85f04f0db948a0ad2e0cee8bdfbbe9f3dfea1\" returns successfully" Apr 21 10:30:41.824467 containerd[1584]: time="2026-04-21T10:30:41.824105815Z" level=info msg="StartContainer for \"f3e9a3e3a90ea9f28a50ba441687fc126e2044c5b889f31cf875c246af9faf3b\" returns successfully" Apr 21 10:30:41.824467 containerd[1584]: time="2026-04-21T10:30:41.824134843Z" level=info msg="StartContainer for \"aba28103048c5e54e2077ff94c9b92211c6efb1a75c4060b8e33e73572b951d1\" returns successfully" Apr 21 10:30:41.833055 kubelet[2299]: E0421 10:30:41.832925 2299 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:30:42.020030 
kubelet[2299]: I0421 10:30:42.019980 2299 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:30:42.500726 kubelet[2299]: E0421 10:30:42.500687 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:42.501155 kubelet[2299]: E0421 10:30:42.500908 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:42.501411 kubelet[2299]: E0421 10:30:42.501374 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:42.501679 kubelet[2299]: E0421 10:30:42.501492 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:42.503278 kubelet[2299]: E0421 10:30:42.503234 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:42.503846 kubelet[2299]: E0421 10:30:42.503345 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:43.702582 kubelet[2299]: E0421 10:30:43.702533 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:43.702946 kubelet[2299]: E0421 10:30:43.702666 2299 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:30:43.702946 kubelet[2299]: E0421 10:30:43.702681 2299 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:43.702946 kubelet[2299]: E0421 10:30:43.702832 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:43.765170 kubelet[2299]: E0421 10:30:43.765116 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 10:30:43.862227 kubelet[2299]: I0421 10:30:43.862181 2299 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:30:43.862227 kubelet[2299]: E0421 10:30:43.862225 2299 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 21 10:30:43.873275 kubelet[2299]: E0421 10:30:43.873195 2299 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:30:43.995651 kubelet[2299]: I0421 10:30:43.995486 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:44.005245 kubelet[2299]: E0421 10:30:44.005200 2299 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:44.005245 kubelet[2299]: I0421 10:30:44.005230 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:44.006863 kubelet[2299]: E0421 10:30:44.006773 2299 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 21 
10:30:44.006863 kubelet[2299]: I0421 10:30:44.006794 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:44.007880 kubelet[2299]: E0421 10:30:44.007863 2299 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:44.385382 kubelet[2299]: I0421 10:30:44.385250 2299 apiserver.go:52] "Watching apiserver" Apr 21 10:30:44.483227 kubelet[2299]: I0421 10:30:44.483133 2299 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:30:44.703482 kubelet[2299]: I0421 10:30:44.703432 2299 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:44.705321 kubelet[2299]: E0421 10:30:44.705295 2299 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:44.705574 kubelet[2299]: E0421 10:30:44.705431 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:46.052235 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-7.scope)... Apr 21 10:30:46.052257 systemd[1]: Reloading... Apr 21 10:30:46.169864 zram_generator::config[2625]: No configuration found. Apr 21 10:30:46.244611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:30:46.294460 systemd[1]: Reloading finished in 241 ms. Apr 21 10:30:46.322830 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 10:30:46.342915 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:30:46.343228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:30:46.353198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:30:46.447513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:30:46.450950 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:30:46.492712 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:30:46.492712 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:30:46.492712 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:30:46.493066 kubelet[2677]: I0421 10:30:46.492775 2677 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:30:46.497541 kubelet[2677]: I0421 10:30:46.497497 2677 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:30:46.497541 kubelet[2677]: I0421 10:30:46.497527 2677 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:30:46.500055 kubelet[2677]: I0421 10:30:46.500040 2677 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:30:46.501317 kubelet[2677]: I0421 10:30:46.501290 2677 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:30:46.502891 kubelet[2677]: I0421 10:30:46.502871 2677 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:30:46.505453 kubelet[2677]: E0421 10:30:46.505420 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:30:46.505453 kubelet[2677]: I0421 10:30:46.505452 2677 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:30:46.509332 kubelet[2677]: I0421 10:30:46.509286 2677 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:30:46.509804 kubelet[2677]: I0421 10:30:46.509768 2677 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:30:46.509941 kubelet[2677]: I0421 10:30:46.509795 2677 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 10:30:46.509941 kubelet[2677]: I0421 10:30:46.509936 2677 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:30:46.510129 
kubelet[2677]: I0421 10:30:46.509943 2677 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:30:46.510129 kubelet[2677]: I0421 10:30:46.509979 2677 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:30:46.510158 kubelet[2677]: I0421 10:30:46.510154 2677 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:30:46.510173 kubelet[2677]: I0421 10:30:46.510164 2677 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:30:46.510186 kubelet[2677]: I0421 10:30:46.510184 2677 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:30:46.510212 kubelet[2677]: I0421 10:30:46.510194 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:30:46.511521 kubelet[2677]: I0421 10:30:46.511489 2677 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:30:46.513234 kubelet[2677]: I0421 10:30:46.512939 2677 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:30:46.518083 kubelet[2677]: I0421 10:30:46.518069 2677 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:30:46.518271 kubelet[2677]: I0421 10:30:46.518262 2677 server.go:1289] "Started kubelet" Apr 21 10:30:46.519660 kubelet[2677]: I0421 10:30:46.519639 2677 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:30:46.521309 kubelet[2677]: I0421 10:30:46.520035 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:30:46.521538 kubelet[2677]: I0421 10:30:46.521500 2677 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:30:46.521562 kubelet[2677]: I0421 10:30:46.520168 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:30:46.522017 
kubelet[2677]: I0421 10:30:46.520307 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:30:46.523987 kubelet[2677]: I0421 10:30:46.523857 2677 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:30:46.526834 kubelet[2677]: I0421 10:30:46.526807 2677 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:30:46.526898 kubelet[2677]: I0421 10:30:46.526880 2677 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:30:46.527006 kubelet[2677]: I0421 10:30:46.526993 2677 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:30:46.528936 kubelet[2677]: I0421 10:30:46.528204 2677 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:30:46.529310 kubelet[2677]: E0421 10:30:46.529297 2677 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:30:46.531820 kubelet[2677]: I0421 10:30:46.530023 2677 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:30:46.531820 kubelet[2677]: I0421 10:30:46.530034 2677 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:30:46.534590 kubelet[2677]: I0421 10:30:46.534464 2677 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:30:46.535391 kubelet[2677]: I0421 10:30:46.535381 2677 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:30:46.535434 kubelet[2677]: I0421 10:30:46.535430 2677 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:30:46.535470 kubelet[2677]: I0421 10:30:46.535464 2677 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:30:46.535495 kubelet[2677]: I0421 10:30:46.535492 2677 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:30:46.535579 kubelet[2677]: E0421 10:30:46.535568 2677 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:30:46.567660 kubelet[2677]: I0421 10:30:46.567630 2677 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:30:46.567660 kubelet[2677]: I0421 10:30:46.567642 2677 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:30:46.567660 kubelet[2677]: I0421 10:30:46.567656 2677 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:30:46.567849 kubelet[2677]: I0421 10:30:46.567830 2677 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:30:46.567868 kubelet[2677]: I0421 10:30:46.567843 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:30:46.567868 kubelet[2677]: I0421 10:30:46.567856 2677 policy_none.go:49] "None policy: Start" Apr 21 10:30:46.567913 kubelet[2677]: I0421 10:30:46.567871 2677 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:30:46.567913 kubelet[2677]: I0421 10:30:46.567879 2677 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:30:46.567983 kubelet[2677]: I0421 10:30:46.567966 2677 state_mem.go:75] "Updated machine memory state" Apr 21 10:30:46.569085 kubelet[2677]: E0421 10:30:46.569054 2677 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:30:46.569288 kubelet[2677]: I0421 
10:30:46.569276 2677 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:30:46.569313 kubelet[2677]: I0421 10:30:46.569289 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:30:46.569876 kubelet[2677]: I0421 10:30:46.569525 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:30:46.570303 kubelet[2677]: E0421 10:30:46.570275 2677 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:30:46.637626 kubelet[2677]: I0421 10:30:46.637516 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:46.637626 kubelet[2677]: I0421 10:30:46.637540 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:46.637626 kubelet[2677]: I0421 10:30:46.637586 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:46.676035 kubelet[2677]: I0421 10:30:46.675976 2677 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:30:46.684524 kubelet[2677]: I0421 10:30:46.684490 2677 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 21 10:30:46.684652 kubelet[2677]: I0421 10:30:46.684570 2677 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:30:46.828603 kubelet[2677]: I0421 10:30:46.828494 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:46.828603 kubelet[2677]: 
I0421 10:30:46.828534 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:30:46.828603 kubelet[2677]: I0421 10:30:46.828556 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:46.828603 kubelet[2677]: I0421 10:30:46.828569 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:46.828603 kubelet[2677]: I0421 10:30:46.828583 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:46.828895 kubelet[2677]: I0421 10:30:46.828596 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ef11cd538f35225d482b8ed2c167f71-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ef11cd538f35225d482b8ed2c167f71\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:46.828895 kubelet[2677]: I0421 10:30:46.828607 2677 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ef11cd538f35225d482b8ed2c167f71-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ef11cd538f35225d482b8ed2c167f71\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:46.828895 kubelet[2677]: I0421 10:30:46.828621 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ef11cd538f35225d482b8ed2c167f71-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ef11cd538f35225d482b8ed2c167f71\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:46.828895 kubelet[2677]: I0421 10:30:46.828632 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:30:46.956469 kubelet[2677]: E0421 10:30:46.956301 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:46.956469 kubelet[2677]: E0421 10:30:46.956380 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:46.956469 kubelet[2677]: E0421 10:30:46.956415 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:47.510715 kubelet[2677]: I0421 10:30:47.510668 2677 apiserver.go:52] "Watching apiserver" Apr 21 10:30:47.527817 kubelet[2677]: I0421 10:30:47.527765 
2677 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:30:47.546837 kubelet[2677]: I0421 10:30:47.546599 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:47.546837 kubelet[2677]: E0421 10:30:47.546706 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:47.546979 kubelet[2677]: E0421 10:30:47.546902 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:47.553567 kubelet[2677]: E0421 10:30:47.552730 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:30:47.553567 kubelet[2677]: E0421 10:30:47.552868 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:47.574705 kubelet[2677]: I0421 10:30:47.574631 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5745826520000001 podStartE2EDuration="1.574582652s" podCreationTimestamp="2026-04-21 10:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:30:47.567280365 +0000 UTC m=+1.110662747" watchObservedRunningTime="2026-04-21 10:30:47.574582652 +0000 UTC m=+1.117965098" Apr 21 10:30:47.587802 kubelet[2677]: I0421 10:30:47.586964 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.58691518 
podStartE2EDuration="1.58691518s" podCreationTimestamp="2026-04-21 10:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:30:47.574794407 +0000 UTC m=+1.118176789" watchObservedRunningTime="2026-04-21 10:30:47.58691518 +0000 UTC m=+1.130297555" Apr 21 10:30:47.588158 kubelet[2677]: I0421 10:30:47.588052 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5880413789999999 podStartE2EDuration="1.588041379s" podCreationTimestamp="2026-04-21 10:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:30:47.586683919 +0000 UTC m=+1.130066304" watchObservedRunningTime="2026-04-21 10:30:47.588041379 +0000 UTC m=+1.131423763" Apr 21 10:30:48.548396 kubelet[2677]: E0421 10:30:48.548336 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:48.548696 kubelet[2677]: E0421 10:30:48.548496 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:49.549438 kubelet[2677]: E0421 10:30:49.549396 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:49.549972 kubelet[2677]: E0421 10:30:49.549473 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:50.662242 kubelet[2677]: E0421 10:30:50.662167 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:51.618717 kubelet[2677]: I0421 10:30:51.618621 2677 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:30:51.619046 containerd[1584]: time="2026-04-21T10:30:51.619014449Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:30:51.619330 kubelet[2677]: I0421 10:30:51.619214 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:30:52.161708 kubelet[2677]: I0421 10:30:52.161651 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5fbcf55-a099-4b39-9fc0-8178c04a8b4d-kube-proxy\") pod \"kube-proxy-qfsld\" (UID: \"c5fbcf55-a099-4b39-9fc0-8178c04a8b4d\") " pod="kube-system/kube-proxy-qfsld" Apr 21 10:30:52.161708 kubelet[2677]: I0421 10:30:52.161687 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlf4\" (UniqueName: \"kubernetes.io/projected/c5fbcf55-a099-4b39-9fc0-8178c04a8b4d-kube-api-access-pmlf4\") pod \"kube-proxy-qfsld\" (UID: \"c5fbcf55-a099-4b39-9fc0-8178c04a8b4d\") " pod="kube-system/kube-proxy-qfsld" Apr 21 10:30:52.161708 kubelet[2677]: I0421 10:30:52.161703 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5fbcf55-a099-4b39-9fc0-8178c04a8b4d-xtables-lock\") pod \"kube-proxy-qfsld\" (UID: \"c5fbcf55-a099-4b39-9fc0-8178c04a8b4d\") " pod="kube-system/kube-proxy-qfsld" Apr 21 10:30:52.161708 kubelet[2677]: I0421 10:30:52.161715 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c5fbcf55-a099-4b39-9fc0-8178c04a8b4d-lib-modules\") pod \"kube-proxy-qfsld\" (UID: \"c5fbcf55-a099-4b39-9fc0-8178c04a8b4d\") " pod="kube-system/kube-proxy-qfsld" Apr 21 10:30:52.419235 kubelet[2677]: E0421 10:30:52.419101 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:52.419839 containerd[1584]: time="2026-04-21T10:30:52.419782426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qfsld,Uid:c5fbcf55-a099-4b39-9fc0-8178c04a8b4d,Namespace:kube-system,Attempt:0,}" Apr 21 10:30:52.456422 containerd[1584]: time="2026-04-21T10:30:52.456036297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:30:52.456422 containerd[1584]: time="2026-04-21T10:30:52.456153846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:30:52.456422 containerd[1584]: time="2026-04-21T10:30:52.456163526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:52.456422 containerd[1584]: time="2026-04-21T10:30:52.456279386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:52.752084 containerd[1584]: time="2026-04-21T10:30:52.752027235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qfsld,Uid:c5fbcf55-a099-4b39-9fc0-8178c04a8b4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b71a2e34337d1e7f00a101321125cbc4cff0940b78b6c53f36d3a71b2170d72a\"" Apr 21 10:30:52.752874 kubelet[2677]: E0421 10:30:52.752841 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:52.761070 containerd[1584]: time="2026-04-21T10:30:52.761030613Z" level=info msg="CreateContainer within sandbox \"b71a2e34337d1e7f00a101321125cbc4cff0940b78b6c53f36d3a71b2170d72a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:30:52.775083 containerd[1584]: time="2026-04-21T10:30:52.774947889Z" level=info msg="CreateContainer within sandbox \"b71a2e34337d1e7f00a101321125cbc4cff0940b78b6c53f36d3a71b2170d72a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84ef3e88de94074554c1cfdd2492ca84e9f3fa7bb7021934c446ba268a365098\"" Apr 21 10:30:52.776589 containerd[1584]: time="2026-04-21T10:30:52.775617440Z" level=info msg="StartContainer for \"84ef3e88de94074554c1cfdd2492ca84e9f3fa7bb7021934c446ba268a365098\"" Apr 21 10:30:52.845229 containerd[1584]: time="2026-04-21T10:30:52.845196566Z" level=info msg="StartContainer for \"84ef3e88de94074554c1cfdd2492ca84e9f3fa7bb7021934c446ba268a365098\" returns successfully" Apr 21 10:30:52.922427 kubelet[2677]: I0421 10:30:52.922385 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swt69\" (UniqueName: \"kubernetes.io/projected/0ca72d4e-9158-4fc8-a142-487dc3140a8a-kube-api-access-swt69\") pod \"tigera-operator-6bf85f8dd-pjsg5\" (UID: \"0ca72d4e-9158-4fc8-a142-487dc3140a8a\") " 
pod="tigera-operator/tigera-operator-6bf85f8dd-pjsg5" Apr 21 10:30:52.922427 kubelet[2677]: I0421 10:30:52.922424 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ca72d4e-9158-4fc8-a142-487dc3140a8a-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-pjsg5\" (UID: \"0ca72d4e-9158-4fc8-a142-487dc3140a8a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-pjsg5" Apr 21 10:30:53.134116 containerd[1584]: time="2026-04-21T10:30:53.133827786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-pjsg5,Uid:0ca72d4e-9158-4fc8-a142-487dc3140a8a,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:30:53.154172 containerd[1584]: time="2026-04-21T10:30:53.154091943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:30:53.154296 containerd[1584]: time="2026-04-21T10:30:53.154127930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:30:53.154847 containerd[1584]: time="2026-04-21T10:30:53.154312964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:53.154847 containerd[1584]: time="2026-04-21T10:30:53.154799311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:30:53.196254 containerd[1584]: time="2026-04-21T10:30:53.196224981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-pjsg5,Uid:0ca72d4e-9158-4fc8-a142-487dc3140a8a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ee69748c46e9f7cfc332a8a0fa86b175b51ec02c01e9b78ad4e64649157c2be3\"" Apr 21 10:30:53.197266 containerd[1584]: time="2026-04-21T10:30:53.197228972Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:30:53.633983 kubelet[2677]: E0421 10:30:53.633958 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:54.531055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218360732.mount: Deactivated successfully. Apr 21 10:30:55.238418 containerd[1584]: time="2026-04-21T10:30:55.238360588Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:55.239082 containerd[1584]: time="2026-04-21T10:30:55.239025425Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:30:55.242190 containerd[1584]: time="2026-04-21T10:30:55.241998414Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:55.245070 containerd[1584]: time="2026-04-21T10:30:55.245015541Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:30:55.245576 containerd[1584]: time="2026-04-21T10:30:55.245534695Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.048270725s" Apr 21 10:30:55.245576 containerd[1584]: time="2026-04-21T10:30:55.245570033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:30:55.250901 containerd[1584]: time="2026-04-21T10:30:55.250842330Z" level=info msg="CreateContainer within sandbox \"ee69748c46e9f7cfc332a8a0fa86b175b51ec02c01e9b78ad4e64649157c2be3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:30:55.259893 containerd[1584]: time="2026-04-21T10:30:55.259846397Z" level=info msg="CreateContainer within sandbox \"ee69748c46e9f7cfc332a8a0fa86b175b51ec02c01e9b78ad4e64649157c2be3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8df8429be7e8ad39a2798e7ec12db312f2b42cc6c4ad32b48265188a902e8534\"" Apr 21 10:30:55.260317 containerd[1584]: time="2026-04-21T10:30:55.260278407Z" level=info msg="StartContainer for \"8df8429be7e8ad39a2798e7ec12db312f2b42cc6c4ad32b48265188a902e8534\"" Apr 21 10:30:55.295430 containerd[1584]: time="2026-04-21T10:30:55.295346795Z" level=info msg="StartContainer for \"8df8429be7e8ad39a2798e7ec12db312f2b42cc6c4ad32b48265188a902e8534\" returns successfully" Apr 21 10:30:55.646794 kubelet[2677]: I0421 10:30:55.646628 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qfsld" podStartSLOduration=3.6466101589999997 podStartE2EDuration="3.646610159s" podCreationTimestamp="2026-04-21 10:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:30:53.641066608 +0000 UTC m=+7.184448992" 
watchObservedRunningTime="2026-04-21 10:30:55.646610159 +0000 UTC m=+9.189992545" Apr 21 10:30:55.646794 kubelet[2677]: I0421 10:30:55.646762 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-pjsg5" podStartSLOduration=1.59667146 podStartE2EDuration="3.646755293s" podCreationTimestamp="2026-04-21 10:30:52 +0000 UTC" firstStartedPulling="2026-04-21 10:30:53.196966712 +0000 UTC m=+6.740349086" lastFinishedPulling="2026-04-21 10:30:55.247050545 +0000 UTC m=+8.790432919" observedRunningTime="2026-04-21 10:30:55.645626525 +0000 UTC m=+9.189008907" watchObservedRunningTime="2026-04-21 10:30:55.646755293 +0000 UTC m=+9.190137678" Apr 21 10:30:57.838635 kubelet[2677]: E0421 10:30:57.838587 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:58.473153 kubelet[2677]: E0421 10:30:58.473083 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:30:58.645795 kubelet[2677]: E0421 10:30:58.645677 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:00.468700 sudo[1786]: pam_unix(sudo:session): session closed for user root Apr 21 10:31:00.472029 sshd[1779]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:00.479162 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:35800.service: Deactivated successfully. Apr 21 10:31:00.488077 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:31:00.488310 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:31:00.491885 systemd-logind[1562]: Removed session 7. 
Apr 21 10:31:00.668341 kubelet[2677]: E0421 10:31:00.668084 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:02.121190 kubelet[2677]: I0421 10:31:02.121138 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c8907ac-1323-47de-82c7-498eab817b69-tigera-ca-bundle\") pod \"calico-typha-b89c5f944-vfx7f\" (UID: \"9c8907ac-1323-47de-82c7-498eab817b69\") " pod="calico-system/calico-typha-b89c5f944-vfx7f" Apr 21 10:31:02.121190 kubelet[2677]: I0421 10:31:02.121189 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-lib-modules\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124206 kubelet[2677]: I0421 10:31:02.121208 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gksx8\" (UniqueName: \"kubernetes.io/projected/9c8907ac-1323-47de-82c7-498eab817b69-kube-api-access-gksx8\") pod \"calico-typha-b89c5f944-vfx7f\" (UID: \"9c8907ac-1323-47de-82c7-498eab817b69\") " pod="calico-system/calico-typha-b89c5f944-vfx7f" Apr 21 10:31:02.124206 kubelet[2677]: I0421 10:31:02.121224 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1b4388b-4662-4431-9481-53f95c0e0fb7-node-certs\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124206 kubelet[2677]: I0421 10:31:02.121237 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-var-lib-calico\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124206 kubelet[2677]: I0421 10:31:02.121251 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-policysync\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124206 kubelet[2677]: I0421 10:31:02.121267 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-var-run-calico\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124366 kubelet[2677]: I0421 10:31:02.121405 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-xtables-lock\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124366 kubelet[2677]: I0421 10:31:02.121510 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-cni-log-dir\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124366 kubelet[2677]: I0421 10:31:02.121567 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-flexvol-driver-host\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124366 kubelet[2677]: I0421 10:31:02.121600 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c8907ac-1323-47de-82c7-498eab817b69-typha-certs\") pod \"calico-typha-b89c5f944-vfx7f\" (UID: \"9c8907ac-1323-47de-82c7-498eab817b69\") " pod="calico-system/calico-typha-b89c5f944-vfx7f" Apr 21 10:31:02.124366 kubelet[2677]: I0421 10:31:02.121616 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-bpffs\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124480 kubelet[2677]: I0421 10:31:02.121633 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-cni-bin-dir\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124480 kubelet[2677]: I0421 10:31:02.121647 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-nodeproc\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124480 kubelet[2677]: I0421 10:31:02.121705 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-sys-fs\") pod 
\"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124480 kubelet[2677]: I0421 10:31:02.121764 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1b4388b-4662-4431-9481-53f95c0e0fb7-tigera-ca-bundle\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124480 kubelet[2677]: I0421 10:31:02.121786 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfglc\" (UniqueName: \"kubernetes.io/projected/d1b4388b-4662-4431-9481-53f95c0e0fb7-kube-api-access-kfglc\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.124616 kubelet[2677]: I0421 10:31:02.121821 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1b4388b-4662-4431-9481-53f95c0e0fb7-cni-net-dir\") pod \"calico-node-qtpqf\" (UID: \"d1b4388b-4662-4431-9481-53f95c0e0fb7\") " pod="calico-system/calico-node-qtpqf" Apr 21 10:31:02.151990 kubelet[2677]: E0421 10:31:02.151925 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:02.223808 kubelet[2677]: I0421 10:31:02.223031 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/452bb3ff-3a51-4e73-8032-feb90475c95f-registration-dir\") pod \"csi-node-driver-78vjk\" (UID: 
\"452bb3ff-3a51-4e73-8032-feb90475c95f\") " pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:02.223808 kubelet[2677]: I0421 10:31:02.223075 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthzq\" (UniqueName: \"kubernetes.io/projected/452bb3ff-3a51-4e73-8032-feb90475c95f-kube-api-access-zthzq\") pod \"csi-node-driver-78vjk\" (UID: \"452bb3ff-3a51-4e73-8032-feb90475c95f\") " pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:02.223808 kubelet[2677]: I0421 10:31:02.223261 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/452bb3ff-3a51-4e73-8032-feb90475c95f-kubelet-dir\") pod \"csi-node-driver-78vjk\" (UID: \"452bb3ff-3a51-4e73-8032-feb90475c95f\") " pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:02.223808 kubelet[2677]: I0421 10:31:02.223290 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/452bb3ff-3a51-4e73-8032-feb90475c95f-socket-dir\") pod \"csi-node-driver-78vjk\" (UID: \"452bb3ff-3a51-4e73-8032-feb90475c95f\") " pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:02.223808 kubelet[2677]: I0421 10:31:02.223305 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/452bb3ff-3a51-4e73-8032-feb90475c95f-varrun\") pod \"csi-node-driver-78vjk\" (UID: \"452bb3ff-3a51-4e73-8032-feb90475c95f\") " pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:02.227155 kubelet[2677]: E0421 10:31:02.227011 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.227155 kubelet[2677]: W0421 10:31:02.227061 2677 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.227379 kubelet[2677]: E0421 10:31:02.227368 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.232766 kubelet[2677]: E0421 10:31:02.232694 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.232766 kubelet[2677]: W0421 10:31:02.232726 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.232766 kubelet[2677]: E0421 10:31:02.232762 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.234399 kubelet[2677]: E0421 10:31:02.234365 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.234399 kubelet[2677]: W0421 10:31:02.234389 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.234399 kubelet[2677]: E0421 10:31:02.234398 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.234607 kubelet[2677]: E0421 10:31:02.234594 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.234607 kubelet[2677]: W0421 10:31:02.234605 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.234645 kubelet[2677]: E0421 10:31:02.234612 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.293016 kubelet[2677]: E0421 10:31:02.292975 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:02.293987 containerd[1584]: time="2026-04-21T10:31:02.293948359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b89c5f944-vfx7f,Uid:9c8907ac-1323-47de-82c7-498eab817b69,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:02.314483 containerd[1584]: time="2026-04-21T10:31:02.314408217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:02.314483 containerd[1584]: time="2026-04-21T10:31:02.314456071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:02.314483 containerd[1584]: time="2026-04-21T10:31:02.314465949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:02.314595 containerd[1584]: time="2026-04-21T10:31:02.314525433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:02.324424 kubelet[2677]: E0421 10:31:02.324403 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.324424 kubelet[2677]: W0421 10:31:02.324423 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.324518 kubelet[2677]: E0421 10:31:02.324439 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.324641 kubelet[2677]: E0421 10:31:02.324628 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.324641 kubelet[2677]: W0421 10:31:02.324640 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.324675 kubelet[2677]: E0421 10:31:02.324648 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.324881 kubelet[2677]: E0421 10:31:02.324862 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.324909 kubelet[2677]: W0421 10:31:02.324882 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.324909 kubelet[2677]: E0421 10:31:02.324896 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.325187 kubelet[2677]: E0421 10:31:02.325158 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.325187 kubelet[2677]: W0421 10:31:02.325177 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.325187 kubelet[2677]: E0421 10:31:02.325186 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.325404 kubelet[2677]: E0421 10:31:02.325378 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.325404 kubelet[2677]: W0421 10:31:02.325396 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.325442 kubelet[2677]: E0421 10:31:02.325405 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.325615 kubelet[2677]: E0421 10:31:02.325591 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.325615 kubelet[2677]: W0421 10:31:02.325607 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.325645 kubelet[2677]: E0421 10:31:02.325616 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.325771 kubelet[2677]: E0421 10:31:02.325760 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.325793 kubelet[2677]: W0421 10:31:02.325770 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.325793 kubelet[2677]: E0421 10:31:02.325776 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.325905 kubelet[2677]: E0421 10:31:02.325894 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.325905 kubelet[2677]: W0421 10:31:02.325904 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.325956 kubelet[2677]: E0421 10:31:02.325910 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.326041 kubelet[2677]: E0421 10:31:02.326030 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.326041 kubelet[2677]: W0421 10:31:02.326040 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.326073 kubelet[2677]: E0421 10:31:02.326045 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.326204 kubelet[2677]: E0421 10:31:02.326192 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.326204 kubelet[2677]: W0421 10:31:02.326203 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.326239 kubelet[2677]: E0421 10:31:02.326209 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.326336 kubelet[2677]: E0421 10:31:02.326325 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.326336 kubelet[2677]: W0421 10:31:02.326335 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.326372 kubelet[2677]: E0421 10:31:02.326340 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.326489 kubelet[2677]: E0421 10:31:02.326478 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.326489 kubelet[2677]: W0421 10:31:02.326488 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.326520 kubelet[2677]: E0421 10:31:02.326493 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.326615 kubelet[2677]: E0421 10:31:02.326604 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.326615 kubelet[2677]: W0421 10:31:02.326614 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.326647 kubelet[2677]: E0421 10:31:02.326621 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.326771 kubelet[2677]: E0421 10:31:02.326760 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.326771 kubelet[2677]: W0421 10:31:02.326766 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.326806 kubelet[2677]: E0421 10:31:02.326771 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.328228 kubelet[2677]: E0421 10:31:02.328200 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.328228 kubelet[2677]: W0421 10:31:02.328220 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.328279 kubelet[2677]: E0421 10:31:02.328229 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.328436 kubelet[2677]: E0421 10:31:02.328411 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.328436 kubelet[2677]: W0421 10:31:02.328426 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.328436 kubelet[2677]: E0421 10:31:02.328435 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.328767 kubelet[2677]: E0421 10:31:02.328755 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.328767 kubelet[2677]: W0421 10:31:02.328765 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.328809 kubelet[2677]: E0421 10:31:02.328773 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.328991 kubelet[2677]: E0421 10:31:02.328967 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.328991 kubelet[2677]: W0421 10:31:02.328983 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.329030 kubelet[2677]: E0421 10:31:02.328990 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.329176 kubelet[2677]: E0421 10:31:02.329165 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.329176 kubelet[2677]: W0421 10:31:02.329172 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.329176 kubelet[2677]: E0421 10:31:02.329178 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.330176 kubelet[2677]: E0421 10:31:02.329699 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.330176 kubelet[2677]: W0421 10:31:02.329708 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.330176 kubelet[2677]: E0421 10:31:02.329716 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.330176 kubelet[2677]: E0421 10:31:02.330033 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.330176 kubelet[2677]: W0421 10:31:02.330040 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.330176 kubelet[2677]: E0421 10:31:02.330047 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.331126 kubelet[2677]: E0421 10:31:02.331106 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.331126 kubelet[2677]: W0421 10:31:02.331126 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.331216 kubelet[2677]: E0421 10:31:02.331139 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.331478 kubelet[2677]: E0421 10:31:02.331421 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.331478 kubelet[2677]: W0421 10:31:02.331431 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.331478 kubelet[2677]: E0421 10:31:02.331440 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.331779 kubelet[2677]: E0421 10:31:02.331697 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.331779 kubelet[2677]: W0421 10:31:02.331704 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.331779 kubelet[2677]: E0421 10:31:02.331710 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:02.331960 kubelet[2677]: E0421 10:31:02.331911 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.331960 kubelet[2677]: W0421 10:31:02.331917 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.331960 kubelet[2677]: E0421 10:31:02.331922 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.339761 kubelet[2677]: E0421 10:31:02.337324 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:02.339761 kubelet[2677]: W0421 10:31:02.337333 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:02.339761 kubelet[2677]: E0421 10:31:02.337340 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:02.339846 containerd[1584]: time="2026-04-21T10:31:02.338678716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qtpqf,Uid:d1b4388b-4662-4431-9481-53f95c0e0fb7,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:02.360681 containerd[1584]: time="2026-04-21T10:31:02.360560802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:02.360681 containerd[1584]: time="2026-04-21T10:31:02.360611372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:02.360681 containerd[1584]: time="2026-04-21T10:31:02.360622198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:02.360985 containerd[1584]: time="2026-04-21T10:31:02.360959359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b89c5f944-vfx7f,Uid:9c8907ac-1323-47de-82c7-498eab817b69,Namespace:calico-system,Attempt:0,} returns sandbox id \"699b50fb3391018a64330206685f9a59f19b4eed5d82c9290ec48f2adf07c225\"" Apr 21 10:31:02.361622 kubelet[2677]: E0421 10:31:02.361591 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:02.361933 containerd[1584]: time="2026-04-21T10:31:02.361889581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:02.362214 containerd[1584]: time="2026-04-21T10:31:02.362193125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:31:02.401018 containerd[1584]: time="2026-04-21T10:31:02.400884373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qtpqf,Uid:d1b4388b-4662-4431-9481-53f95c0e0fb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\"" Apr 21 10:31:03.536588 kubelet[2677]: E0421 10:31:03.536542 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:04.403081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169414470.mount: 
Deactivated successfully. Apr 21 10:31:05.302998 containerd[1584]: time="2026-04-21T10:31:05.302938561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:05.303500 containerd[1584]: time="2026-04-21T10:31:05.303458410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:31:05.304316 containerd[1584]: time="2026-04-21T10:31:05.304282762Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:05.306536 containerd[1584]: time="2026-04-21T10:31:05.306491220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:05.307058 containerd[1584]: time="2026-04-21T10:31:05.307025206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.944800516s" Apr 21 10:31:05.307096 containerd[1584]: time="2026-04-21T10:31:05.307057073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:31:05.307980 containerd[1584]: time="2026-04-21T10:31:05.307957474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:31:05.321955 containerd[1584]: time="2026-04-21T10:31:05.321921983Z" level=info msg="CreateContainer within sandbox 
\"699b50fb3391018a64330206685f9a59f19b4eed5d82c9290ec48f2adf07c225\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:31:05.332917 containerd[1584]: time="2026-04-21T10:31:05.332888262Z" level=info msg="CreateContainer within sandbox \"699b50fb3391018a64330206685f9a59f19b4eed5d82c9290ec48f2adf07c225\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a31c19fc5dc2f0974d4491a41b4006c4743c73045998064f848432275b7d6fdc\"" Apr 21 10:31:05.333227 containerd[1584]: time="2026-04-21T10:31:05.333206966Z" level=info msg="StartContainer for \"a31c19fc5dc2f0974d4491a41b4006c4743c73045998064f848432275b7d6fdc\"" Apr 21 10:31:05.394721 containerd[1584]: time="2026-04-21T10:31:05.394688549Z" level=info msg="StartContainer for \"a31c19fc5dc2f0974d4491a41b4006c4743c73045998064f848432275b7d6fdc\" returns successfully" Apr 21 10:31:05.536894 kubelet[2677]: E0421 10:31:05.536731 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:05.661514 kubelet[2677]: E0421 10:31:05.661379 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:05.675097 kubelet[2677]: I0421 10:31:05.674979 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b89c5f944-vfx7f" podStartSLOduration=1.729194627 podStartE2EDuration="4.674963879s" podCreationTimestamp="2026-04-21 10:31:01 +0000 UTC" firstStartedPulling="2026-04-21 10:31:02.361994867 +0000 UTC m=+15.905377248" lastFinishedPulling="2026-04-21 10:31:05.307764117 +0000 UTC m=+18.851146500" observedRunningTime="2026-04-21 10:31:05.67490078 +0000 UTC 
m=+19.218283163" watchObservedRunningTime="2026-04-21 10:31:05.674963879 +0000 UTC m=+19.218346262" Apr 21 10:31:05.686444 kubelet[2677]: E0421 10:31:05.686405 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.686444 kubelet[2677]: W0421 10:31:05.686428 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.686444 kubelet[2677]: E0421 10:31:05.686445 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.686667 kubelet[2677]: E0421 10:31:05.686636 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.686667 kubelet[2677]: W0421 10:31:05.686652 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.686667 kubelet[2677]: E0421 10:31:05.686659 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.686898 kubelet[2677]: E0421 10:31:05.686862 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.686898 kubelet[2677]: W0421 10:31:05.686878 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.686898 kubelet[2677]: E0421 10:31:05.686884 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.687070 kubelet[2677]: E0421 10:31:05.687045 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.687070 kubelet[2677]: W0421 10:31:05.687057 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.687070 kubelet[2677]: E0421 10:31:05.687062 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.687219 kubelet[2677]: E0421 10:31:05.687207 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.687219 kubelet[2677]: W0421 10:31:05.687216 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.687260 kubelet[2677]: E0421 10:31:05.687221 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.687354 kubelet[2677]: E0421 10:31:05.687342 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.687354 kubelet[2677]: W0421 10:31:05.687351 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.687395 kubelet[2677]: E0421 10:31:05.687356 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.687487 kubelet[2677]: E0421 10:31:05.687475 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.687487 kubelet[2677]: W0421 10:31:05.687485 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.687528 kubelet[2677]: E0421 10:31:05.687490 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.687621 kubelet[2677]: E0421 10:31:05.687609 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.687621 kubelet[2677]: W0421 10:31:05.687620 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.687662 kubelet[2677]: E0421 10:31:05.687625 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.687817 kubelet[2677]: E0421 10:31:05.687805 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.687817 kubelet[2677]: W0421 10:31:05.687816 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.687865 kubelet[2677]: E0421 10:31:05.687821 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.688007 kubelet[2677]: E0421 10:31:05.687995 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.688007 kubelet[2677]: W0421 10:31:05.688005 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.688042 kubelet[2677]: E0421 10:31:05.688010 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.688199 kubelet[2677]: E0421 10:31:05.688182 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.688216 kubelet[2677]: W0421 10:31:05.688199 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.688216 kubelet[2677]: E0421 10:31:05.688211 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.688442 kubelet[2677]: E0421 10:31:05.688418 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.688442 kubelet[2677]: W0421 10:31:05.688432 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.688442 kubelet[2677]: E0421 10:31:05.688438 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.688592 kubelet[2677]: E0421 10:31:05.688575 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.688624 kubelet[2677]: W0421 10:31:05.688610 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.688624 kubelet[2677]: E0421 10:31:05.688621 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.688787 kubelet[2677]: E0421 10:31:05.688776 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.688787 kubelet[2677]: W0421 10:31:05.688786 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.688827 kubelet[2677]: E0421 10:31:05.688790 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.688922 kubelet[2677]: E0421 10:31:05.688911 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.688922 kubelet[2677]: W0421 10:31:05.688921 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.688958 kubelet[2677]: E0421 10:31:05.688926 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.752962 kubelet[2677]: E0421 10:31:05.752936 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.752962 kubelet[2677]: W0421 10:31:05.752956 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.753108 kubelet[2677]: E0421 10:31:05.752972 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.753270 kubelet[2677]: E0421 10:31:05.753254 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.753290 kubelet[2677]: W0421 10:31:05.753270 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.753290 kubelet[2677]: E0421 10:31:05.753281 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.753578 kubelet[2677]: E0421 10:31:05.753542 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.753599 kubelet[2677]: W0421 10:31:05.753578 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.753599 kubelet[2677]: E0421 10:31:05.753589 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.753788 kubelet[2677]: E0421 10:31:05.753777 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.753788 kubelet[2677]: W0421 10:31:05.753787 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.754333 kubelet[2677]: E0421 10:31:05.753794 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:05.754333 kubelet[2677]: E0421 10:31:05.753945 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.754333 kubelet[2677]: W0421 10:31:05.753950 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.754333 kubelet[2677]: E0421 10:31:05.753955 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:05.756958 kubelet[2677]: E0421 10:31:05.756937 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:05.756976 kubelet[2677]: W0421 10:31:05.756971 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:05.756993 kubelet[2677]: E0421 10:31:05.756980 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:31:06.662586 kubelet[2677]: I0421 10:31:06.662521 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:31:06.662937 kubelet[2677]: E0421 10:31:06.662839 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:06.696506 kubelet[2677]: E0421 10:31:06.696448 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:31:06.696506 kubelet[2677]: W0421 10:31:06.696474 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:31:06.696506 kubelet[2677]: E0421 10:31:06.696520 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:31:07.045457 containerd[1584]: time="2026-04-21T10:31:07.045406595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:07.046127 containerd[1584]: time="2026-04-21T10:31:07.046066767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:31:07.046975 containerd[1584]: time="2026-04-21T10:31:07.046955466Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:07.048761 containerd[1584]: time="2026-04-21T10:31:07.048703784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:07.049171 containerd[1584]: time="2026-04-21T10:31:07.049117232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.741127768s" Apr 21 10:31:07.049171 containerd[1584]: time="2026-04-21T10:31:07.049158733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:31:07.053135 containerd[1584]: time="2026-04-21T10:31:07.053111780Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:31:07.063669 containerd[1584]: time="2026-04-21T10:31:07.063623334Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db4cb3d68339a82273bd69e306ac7c171bca3570b4e1f9805d7ef0accb774a91\"" Apr 21 10:31:07.064020 containerd[1584]: time="2026-04-21T10:31:07.063942814Z" level=info msg="StartContainer for \"db4cb3d68339a82273bd69e306ac7c171bca3570b4e1f9805d7ef0accb774a91\"" Apr 21 10:31:07.109345 containerd[1584]: time="2026-04-21T10:31:07.109318607Z" level=info msg="StartContainer for \"db4cb3d68339a82273bd69e306ac7c171bca3570b4e1f9805d7ef0accb774a91\" returns successfully" Apr 21 10:31:07.214823 containerd[1584]: time="2026-04-21T10:31:07.214727364Z" level=info msg="shim disconnected" id=db4cb3d68339a82273bd69e306ac7c171bca3570b4e1f9805d7ef0accb774a91 namespace=k8s.io Apr 21 10:31:07.214823 containerd[1584]: time="2026-04-21T10:31:07.214813218Z" level=warning msg="cleaning up after shim disconnected" id=db4cb3d68339a82273bd69e306ac7c171bca3570b4e1f9805d7ef0accb774a91 namespace=k8s.io Apr 21 10:31:07.214823 containerd[1584]: time="2026-04-21T10:31:07.214821873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:31:07.536832 kubelet[2677]: E0421 10:31:07.536585 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:07.666417 containerd[1584]: time="2026-04-21T10:31:07.666356078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:31:08.060849 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-db4cb3d68339a82273bd69e306ac7c171bca3570b4e1f9805d7ef0accb774a91-rootfs.mount: Deactivated successfully. Apr 21 10:31:08.631867 update_engine[1571]: I20260421 10:31:08.631763 1571 update_attempter.cc:509] Updating boot flags... Apr 21 10:31:08.652704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3411) Apr 21 10:31:08.667781 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3409) Apr 21 10:31:08.687840 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3409) Apr 21 10:31:09.535973 kubelet[2677]: E0421 10:31:09.535810 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:11.536396 kubelet[2677]: E0421 10:31:11.536284 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:13.538008 kubelet[2677]: E0421 10:31:13.537629 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:14.303340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353260505.mount: Deactivated successfully. 
Apr 21 10:31:14.429862 containerd[1584]: time="2026-04-21T10:31:14.429779653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:14.430318 containerd[1584]: time="2026-04-21T10:31:14.430280058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:31:14.436385 containerd[1584]: time="2026-04-21T10:31:14.436345101Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:14.436924 containerd[1584]: time="2026-04-21T10:31:14.436895466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.770512801s" Apr 21 10:31:14.437008 containerd[1584]: time="2026-04-21T10:31:14.436926508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:31:14.437344 containerd[1584]: time="2026-04-21T10:31:14.437305359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:14.440413 containerd[1584]: time="2026-04-21T10:31:14.440389585Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:31:14.561045 containerd[1584]: time="2026-04-21T10:31:14.560909800Z" level=info 
msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"0cffa26cd36a501391270c5a93554c50a2f807c967c593352cb9f20c01760796\"" Apr 21 10:31:14.562376 containerd[1584]: time="2026-04-21T10:31:14.561879703Z" level=info msg="StartContainer for \"0cffa26cd36a501391270c5a93554c50a2f807c967c593352cb9f20c01760796\"" Apr 21 10:31:14.629234 containerd[1584]: time="2026-04-21T10:31:14.629180901Z" level=info msg="StartContainer for \"0cffa26cd36a501391270c5a93554c50a2f807c967c593352cb9f20c01760796\" returns successfully" Apr 21 10:31:14.685390 containerd[1584]: time="2026-04-21T10:31:14.685326726Z" level=info msg="shim disconnected" id=0cffa26cd36a501391270c5a93554c50a2f807c967c593352cb9f20c01760796 namespace=k8s.io Apr 21 10:31:14.685390 containerd[1584]: time="2026-04-21T10:31:14.685376552Z" level=warning msg="cleaning up after shim disconnected" id=0cffa26cd36a501391270c5a93554c50a2f807c967c593352cb9f20c01760796 namespace=k8s.io Apr 21 10:31:14.685390 containerd[1584]: time="2026-04-21T10:31:14.685383355Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:31:15.303541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cffa26cd36a501391270c5a93554c50a2f807c967c593352cb9f20c01760796-rootfs.mount: Deactivated successfully. 
Apr 21 10:31:15.536388 kubelet[2677]: E0421 10:31:15.536324 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:15.698586 containerd[1584]: time="2026-04-21T10:31:15.698555495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:31:17.537303 kubelet[2677]: E0421 10:31:17.537198 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:19.536926 kubelet[2677]: E0421 10:31:19.536854 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:21.536840 kubelet[2677]: E0421 10:31:21.536712 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:21.955767 kubelet[2677]: I0421 10:31:21.955715 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:31:21.956072 kubelet[2677]: E0421 10:31:21.956047 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:22.714626 kubelet[2677]: E0421 10:31:22.714481 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:23.536575 kubelet[2677]: E0421 10:31:23.536459 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:25.536255 kubelet[2677]: E0421 10:31:25.536200 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:26.240059 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:45124.service - OpenSSH per-connection server daemon (10.0.0.1:45124). Apr 21 10:31:26.275531 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 45124 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:26.276803 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:26.280141 systemd-logind[1562]: New session 8 of user core. Apr 21 10:31:26.287958 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:31:26.404147 sshd[3497]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:26.406629 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:45124.service: Deactivated successfully. Apr 21 10:31:26.409478 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:31:26.409633 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. 
Apr 21 10:31:26.411183 systemd-logind[1562]: Removed session 8. Apr 21 10:31:26.989417 containerd[1584]: time="2026-04-21T10:31:26.989362071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:26.990081 containerd[1584]: time="2026-04-21T10:31:26.990038436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:31:26.990767 containerd[1584]: time="2026-04-21T10:31:26.990712909Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:26.995320 containerd[1584]: time="2026-04-21T10:31:26.995276503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:26.996009 containerd[1584]: time="2026-04-21T10:31:26.995983623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 11.297395801s" Apr 21 10:31:26.996040 containerd[1584]: time="2026-04-21T10:31:26.996013173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:31:26.999990 containerd[1584]: time="2026-04-21T10:31:26.999962594Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:31:27.011538 containerd[1584]: 
time="2026-04-21T10:31:27.011493564Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"89f22fec853ace74008216a2647b2ce63d4181f4077a2f3ca65c6c364a20fccd\"" Apr 21 10:31:27.012083 containerd[1584]: time="2026-04-21T10:31:27.012058592Z" level=info msg="StartContainer for \"89f22fec853ace74008216a2647b2ce63d4181f4077a2f3ca65c6c364a20fccd\"" Apr 21 10:31:27.083241 containerd[1584]: time="2026-04-21T10:31:27.083172004Z" level=info msg="StartContainer for \"89f22fec853ace74008216a2647b2ce63d4181f4077a2f3ca65c6c364a20fccd\" returns successfully" Apr 21 10:31:27.488953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89f22fec853ace74008216a2647b2ce63d4181f4077a2f3ca65c6c364a20fccd-rootfs.mount: Deactivated successfully. Apr 21 10:31:27.494496 containerd[1584]: time="2026-04-21T10:31:27.494297226Z" level=info msg="shim disconnected" id=89f22fec853ace74008216a2647b2ce63d4181f4077a2f3ca65c6c364a20fccd namespace=k8s.io Apr 21 10:31:27.494496 containerd[1584]: time="2026-04-21T10:31:27.494344764Z" level=warning msg="cleaning up after shim disconnected" id=89f22fec853ace74008216a2647b2ce63d4181f4077a2f3ca65c6c364a20fccd namespace=k8s.io Apr 21 10:31:27.494496 containerd[1584]: time="2026-04-21T10:31:27.494351203Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:31:27.533356 kubelet[2677]: I0421 10:31:27.533292 2677 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:31:27.538275 containerd[1584]: time="2026-04-21T10:31:27.538187895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78vjk,Uid:452bb3ff-3a51-4e73-8032-feb90475c95f,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:27.639808 containerd[1584]: time="2026-04-21T10:31:27.639731331Z" level=error msg="Failed to destroy network for sandbox 
\"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:27.640049 containerd[1584]: time="2026-04-21T10:31:27.640008848Z" level=error msg="encountered an error cleaning up failed sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:27.640084 containerd[1584]: time="2026-04-21T10:31:27.640059293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78vjk,Uid:452bb3ff-3a51-4e73-8032-feb90475c95f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:27.645499 kubelet[2677]: E0421 10:31:27.645467 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:27.645595 kubelet[2677]: E0421 10:31:27.645519 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:27.645595 kubelet[2677]: E0421 10:31:27.645538 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78vjk" Apr 21 10:31:27.645640 kubelet[2677]: E0421 10:31:27.645590 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-78vjk_calico-system(452bb3ff-3a51-4e73-8032-feb90475c95f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-78vjk_calico-system(452bb3ff-3a51-4e73-8032-feb90475c95f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:27.724691 kubelet[2677]: I0421 10:31:27.724635 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:27.738913 kubelet[2677]: I0421 10:31:27.738878 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8131e61-a17e-4cca-839f-e8ca9415fa72-config-volume\") pod \"coredns-674b8bbfcf-fjjgs\" (UID: 
\"b8131e61-a17e-4cca-839f-e8ca9415fa72\") " pod="kube-system/coredns-674b8bbfcf-fjjgs" Apr 21 10:31:27.739075 kubelet[2677]: I0421 10:31:27.738988 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27e19655-f488-4a2a-bd85-641f0d0be96a-config-volume\") pod \"coredns-674b8bbfcf-bz8j8\" (UID: \"27e19655-f488-4a2a-bd85-641f0d0be96a\") " pod="kube-system/coredns-674b8bbfcf-bz8j8" Apr 21 10:31:27.739075 kubelet[2677]: I0421 10:31:27.739044 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2wx\" (UniqueName: \"kubernetes.io/projected/27e19655-f488-4a2a-bd85-641f0d0be96a-kube-api-access-ql2wx\") pod \"coredns-674b8bbfcf-bz8j8\" (UID: \"27e19655-f488-4a2a-bd85-641f0d0be96a\") " pod="kube-system/coredns-674b8bbfcf-bz8j8" Apr 21 10:31:27.739075 kubelet[2677]: I0421 10:31:27.739060 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a945fca-2767-4f37-a50e-78d8316fd74a-calico-apiserver-certs\") pod \"calico-apiserver-9d4f98f48-xl724\" (UID: \"3a945fca-2767-4f37-a50e-78d8316fd74a\") " pod="calico-system/calico-apiserver-9d4f98f48-xl724" Apr 21 10:31:27.739162 kubelet[2677]: I0421 10:31:27.739148 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwdr\" (UniqueName: \"kubernetes.io/projected/b8131e61-a17e-4cca-839f-e8ca9415fa72-kube-api-access-qhwdr\") pod \"coredns-674b8bbfcf-fjjgs\" (UID: \"b8131e61-a17e-4cca-839f-e8ca9415fa72\") " pod="kube-system/coredns-674b8bbfcf-fjjgs" Apr 21 10:31:27.739189 kubelet[2677]: I0421 10:31:27.739166 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz88s\" (UniqueName: 
\"kubernetes.io/projected/a1c477ff-f862-41e4-b022-427067e53a81-kube-api-access-lz88s\") pod \"calico-apiserver-9d4f98f48-rr2sp\" (UID: \"a1c477ff-f862-41e4-b022-427067e53a81\") " pod="calico-system/calico-apiserver-9d4f98f48-rr2sp" Apr 21 10:31:27.739189 kubelet[2677]: I0421 10:31:27.739178 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-nginx-config\") pod \"whisker-5d579d66fd-65qf7\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " pod="calico-system/whisker-5d579d66fd-65qf7" Apr 21 10:31:27.739245 kubelet[2677]: I0421 10:31:27.739212 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ab37260-b13a-4e6c-a73a-39b7c6b94371-config\") pod \"goldmane-5b85766d88-jlm8h\" (UID: \"7ab37260-b13a-4e6c-a73a-39b7c6b94371\") " pod="calico-system/goldmane-5b85766d88-jlm8h" Apr 21 10:31:27.739245 kubelet[2677]: I0421 10:31:27.739225 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5c6\" (UniqueName: \"kubernetes.io/projected/7ab37260-b13a-4e6c-a73a-39b7c6b94371-kube-api-access-fl5c6\") pod \"goldmane-5b85766d88-jlm8h\" (UID: \"7ab37260-b13a-4e6c-a73a-39b7c6b94371\") " pod="calico-system/goldmane-5b85766d88-jlm8h" Apr 21 10:31:27.739245 kubelet[2677]: I0421 10:31:27.739238 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1c477ff-f862-41e4-b022-427067e53a81-calico-apiserver-certs\") pod \"calico-apiserver-9d4f98f48-rr2sp\" (UID: \"a1c477ff-f862-41e4-b022-427067e53a81\") " pod="calico-system/calico-apiserver-9d4f98f48-rr2sp" Apr 21 10:31:27.739294 kubelet[2677]: I0421 10:31:27.739249 2677 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8edab2e0-6da0-42e7-9867-a2817bcabbc7-tigera-ca-bundle\") pod \"calico-kube-controllers-c66f74b6d-g7cvz\" (UID: \"8edab2e0-6da0-42e7-9867-a2817bcabbc7\") " pod="calico-system/calico-kube-controllers-c66f74b6d-g7cvz" Apr 21 10:31:27.739294 kubelet[2677]: I0421 10:31:27.739261 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4bx\" (UniqueName: \"kubernetes.io/projected/8edab2e0-6da0-42e7-9867-a2817bcabbc7-kube-api-access-xs4bx\") pod \"calico-kube-controllers-c66f74b6d-g7cvz\" (UID: \"8edab2e0-6da0-42e7-9867-a2817bcabbc7\") " pod="calico-system/calico-kube-controllers-c66f74b6d-g7cvz" Apr 21 10:31:27.739294 kubelet[2677]: I0421 10:31:27.739274 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-backend-key-pair\") pod \"whisker-5d579d66fd-65qf7\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " pod="calico-system/whisker-5d579d66fd-65qf7" Apr 21 10:31:27.739348 kubelet[2677]: I0421 10:31:27.739299 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab37260-b13a-4e6c-a73a-39b7c6b94371-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-jlm8h\" (UID: \"7ab37260-b13a-4e6c-a73a-39b7c6b94371\") " pod="calico-system/goldmane-5b85766d88-jlm8h" Apr 21 10:31:27.739348 kubelet[2677]: I0421 10:31:27.739311 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7ab37260-b13a-4e6c-a73a-39b7c6b94371-goldmane-key-pair\") pod \"goldmane-5b85766d88-jlm8h\" (UID: \"7ab37260-b13a-4e6c-a73a-39b7c6b94371\") " 
pod="calico-system/goldmane-5b85766d88-jlm8h" Apr 21 10:31:27.739348 kubelet[2677]: I0421 10:31:27.739322 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w24c4\" (UniqueName: \"kubernetes.io/projected/3a945fca-2767-4f37-a50e-78d8316fd74a-kube-api-access-w24c4\") pod \"calico-apiserver-9d4f98f48-xl724\" (UID: \"3a945fca-2767-4f37-a50e-78d8316fd74a\") " pod="calico-system/calico-apiserver-9d4f98f48-xl724" Apr 21 10:31:27.739348 kubelet[2677]: I0421 10:31:27.739334 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-ca-bundle\") pod \"whisker-5d579d66fd-65qf7\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " pod="calico-system/whisker-5d579d66fd-65qf7" Apr 21 10:31:27.739348 kubelet[2677]: I0421 10:31:27.739347 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh75p\" (UniqueName: \"kubernetes.io/projected/29f602f8-9f3c-4ef2-b2a8-11d244890352-kube-api-access-nh75p\") pod \"whisker-5d579d66fd-65qf7\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " pod="calico-system/whisker-5d579d66fd-65qf7" Apr 21 10:31:27.740483 containerd[1584]: time="2026-04-21T10:31:27.740437840Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:31:27.740979 containerd[1584]: time="2026-04-21T10:31:27.740782747Z" level=info msg="StopPodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\"" Apr 21 10:31:27.741662 containerd[1584]: time="2026-04-21T10:31:27.741610221Z" level=info msg="Ensure that sandbox 41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1 in task-service has been cleanup successfully" Apr 21 10:31:27.761186 
containerd[1584]: time="2026-04-21T10:31:27.761083307Z" level=info msg="CreateContainer within sandbox \"e6211fbc6ef317cd02f64e5568a5977a7839afcb71baf561d0f65d0185c8ac6a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0023d6cd995cd723ef22866aa114dcd47fd9e715923474e8b047aaa5b82a019e\"" Apr 21 10:31:27.762107 containerd[1584]: time="2026-04-21T10:31:27.762072059Z" level=info msg="StartContainer for \"0023d6cd995cd723ef22866aa114dcd47fd9e715923474e8b047aaa5b82a019e\"" Apr 21 10:31:27.783647 containerd[1584]: time="2026-04-21T10:31:27.783599777Z" level=error msg="StopPodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" failed" error="failed to destroy network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:27.783873 kubelet[2677]: E0421 10:31:27.783817 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:27.783913 kubelet[2677]: E0421 10:31:27.783873 2677 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1"} Apr 21 10:31:27.783962 kubelet[2677]: E0421 10:31:27.783913 2677 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"452bb3ff-3a51-4e73-8032-feb90475c95f\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:31:27.783962 kubelet[2677]: E0421 10:31:27.783950 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"452bb3ff-3a51-4e73-8032-feb90475c95f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78vjk" podUID="452bb3ff-3a51-4e73-8032-feb90475c95f" Apr 21 10:31:27.812625 containerd[1584]: time="2026-04-21T10:31:27.812593084Z" level=info msg="StartContainer for \"0023d6cd995cd723ef22866aa114dcd47fd9e715923474e8b047aaa5b82a019e\" returns successfully" Apr 21 10:31:27.862624 kubelet[2677]: E0421 10:31:27.862598 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:27.863566 containerd[1584]: time="2026-04-21T10:31:27.863538472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fjjgs,Uid:b8131e61-a17e-4cca-839f-e8ca9415fa72,Namespace:kube-system,Attempt:0,}" Apr 21 10:31:27.863840 kubelet[2677]: E0421 10:31:27.863812 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:27.864353 containerd[1584]: time="2026-04-21T10:31:27.864302993Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-bz8j8,Uid:27e19655-f488-4a2a-bd85-641f0d0be96a,Namespace:kube-system,Attempt:0,}" Apr 21 10:31:27.867885 containerd[1584]: time="2026-04-21T10:31:27.867847168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c66f74b6d-g7cvz,Uid:8edab2e0-6da0-42e7-9867-a2817bcabbc7,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:27.879983 containerd[1584]: time="2026-04-21T10:31:27.879947753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-rr2sp,Uid:a1c477ff-f862-41e4-b022-427067e53a81,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:27.879983 containerd[1584]: time="2026-04-21T10:31:27.879979164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-xl724,Uid:3a945fca-2767-4f37-a50e-78d8316fd74a,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:27.882606 containerd[1584]: time="2026-04-21T10:31:27.882568220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-jlm8h,Uid:7ab37260-b13a-4e6c-a73a-39b7c6b94371,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:27.887483 containerd[1584]: time="2026-04-21T10:31:27.887442159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d579d66fd-65qf7,Uid:29f602f8-9f3c-4ef2-b2a8-11d244890352,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:28.013906 containerd[1584]: time="2026-04-21T10:31:28.011284218Z" level=error msg="Failed to destroy network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.013906 containerd[1584]: time="2026-04-21T10:31:28.011589120Z" level=error msg="encountered an error cleaning up failed sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.013906 containerd[1584]: time="2026-04-21T10:31:28.011625144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fjjgs,Uid:b8131e61-a17e-4cca-839f-e8ca9415fa72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.015559 kubelet[2677]: E0421 10:31:28.014169 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.015559 kubelet[2677]: E0421 10:31:28.014239 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fjjgs" Apr 21 10:31:28.015559 kubelet[2677]: E0421 10:31:28.014267 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fjjgs" Apr 21 10:31:28.015778 kubelet[2677]: E0421 10:31:28.014322 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fjjgs_kube-system(b8131e61-a17e-4cca-839f-e8ca9415fa72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fjjgs_kube-system(b8131e61-a17e-4cca-839f-e8ca9415fa72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fjjgs" podUID="b8131e61-a17e-4cca-839f-e8ca9415fa72" Apr 21 10:31:28.033891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d-shm.mount: Deactivated successfully. Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.134 [INFO][3867] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.136 [INFO][3867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" iface="eth0" netns="/var/run/netns/cni-61ff891d-9a0d-9b27-97fc-db89d3b8bf2d" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.137 [INFO][3867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" iface="eth0" netns="/var/run/netns/cni-61ff891d-9a0d-9b27-97fc-db89d3b8bf2d" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.137 [INFO][3867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" iface="eth0" netns="/var/run/netns/cni-61ff891d-9a0d-9b27-97fc-db89d3b8bf2d" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.137 [INFO][3867] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.137 [INFO][3867] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.187 [INFO][3918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" HandleID="k8s-pod-network.fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Workload="localhost-k8s-whisker--5d579d66fd--65qf7-eth0" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.187 [INFO][3918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.187 [INFO][3918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.196 [WARNING][3918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" HandleID="k8s-pod-network.fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Workload="localhost-k8s-whisker--5d579d66fd--65qf7-eth0" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.196 [INFO][3918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" HandleID="k8s-pod-network.fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Workload="localhost-k8s-whisker--5d579d66fd--65qf7-eth0" Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.198 [INFO][3918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.204066 containerd[1584]: 2026-04-21 10:31:28.200 [INFO][3867] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1" Apr 21 10:31:28.207326 systemd[1]: run-netns-cni\x2d61ff891d\x2d9a0d\x2d9b27\x2d97fc\x2ddb89d3b8bf2d.mount: Deactivated successfully. Apr 21 10:31:28.210204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1-shm.mount: Deactivated successfully. 
Apr 21 10:31:28.215248 containerd[1584]: time="2026-04-21T10:31:28.215186651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d579d66fd-65qf7,Uid:29f602f8-9f3c-4ef2-b2a8-11d244890352,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.215843 kubelet[2677]: E0421 10:31:28.215785 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.215934 kubelet[2677]: E0421 10:31:28.215872 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb357869e94af4d44bf362543ddb4ffa1ee812cd64dfc29b4c364945aca33a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d579d66fd-65qf7" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.135 [INFO][3892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.140 [INFO][3892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" iface="eth0" netns="/var/run/netns/cni-014e9fe2-03ea-074e-a4db-bef90b82dcda" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.140 [INFO][3892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" iface="eth0" netns="/var/run/netns/cni-014e9fe2-03ea-074e-a4db-bef90b82dcda" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.141 [INFO][3892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" iface="eth0" netns="/var/run/netns/cni-014e9fe2-03ea-074e-a4db-bef90b82dcda" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.141 [INFO][3892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.141 [INFO][3892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.195 [INFO][3926] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" HandleID="k8s-pod-network.5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Workload="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.195 [INFO][3926] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.212 [INFO][3926] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.216 [WARNING][3926] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" HandleID="k8s-pod-network.5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Workload="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.216 [INFO][3926] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" HandleID="k8s-pod-network.5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Workload="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.218 [INFO][3926] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.222340 containerd[1584]: 2026-04-21 10:31:28.220 [INFO][3892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f" Apr 21 10:31:28.225420 systemd[1]: run-netns-cni\x2d014e9fe2\x2d03ea\x2d074e\x2da4db\x2dbef90b82dcda.mount: Deactivated successfully. 
Apr 21 10:31:28.227211 containerd[1584]: time="2026-04-21T10:31:28.227186255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c66f74b6d-g7cvz,Uid:8edab2e0-6da0-42e7-9867-a2817bcabbc7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.227818 kubelet[2677]: E0421 10:31:28.227555 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.227818 kubelet[2677]: E0421 10:31:28.227600 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c66f74b6d-g7cvz" Apr 21 10:31:28.227818 kubelet[2677]: E0421 10:31:28.227628 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-c66f74b6d-g7cvz" Apr 21 10:31:28.227928 kubelet[2677]: E0421 10:31:28.227666 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c66f74b6d-g7cvz_calico-system(8edab2e0-6da0-42e7-9867-a2817bcabbc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c66f74b6d-g7cvz_calico-system(8edab2e0-6da0-42e7-9867-a2817bcabbc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c66f74b6d-g7cvz" podUID="8edab2e0-6da0-42e7-9867-a2817bcabbc7" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.134 [INFO][3861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.135 [INFO][3861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" iface="eth0" netns="/var/run/netns/cni-899b033b-6eef-0d2b-8fab-d1c3b5c23c7c" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.138 [INFO][3861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" iface="eth0" netns="/var/run/netns/cni-899b033b-6eef-0d2b-8fab-d1c3b5c23c7c" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.138 [INFO][3861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" iface="eth0" netns="/var/run/netns/cni-899b033b-6eef-0d2b-8fab-d1c3b5c23c7c" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.138 [INFO][3861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.138 [INFO][3861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.200 [INFO][3925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" HandleID="k8s-pod-network.d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Workload="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.201 [INFO][3925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.218 [INFO][3925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.225 [WARNING][3925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" HandleID="k8s-pod-network.d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Workload="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.225 [INFO][3925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" HandleID="k8s-pod-network.d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Workload="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.226 [INFO][3925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.232318 containerd[1584]: 2026-04-21 10:31:28.230 [INFO][3861] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.133 [INFO][3827] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.133 [INFO][3827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" iface="eth0" netns="/var/run/netns/cni-7f54fa22-f115-242a-9e53-cdb4121f299a" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.134 [INFO][3827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" iface="eth0" netns="/var/run/netns/cni-7f54fa22-f115-242a-9e53-cdb4121f299a" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.134 [INFO][3827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" iface="eth0" netns="/var/run/netns/cni-7f54fa22-f115-242a-9e53-cdb4121f299a" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.134 [INFO][3827] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.134 [INFO][3827] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.191 [INFO][3915] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" HandleID="k8s-pod-network.35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Workload="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.192 [INFO][3915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.198 [INFO][3915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.204 [WARNING][3915] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" HandleID="k8s-pod-network.35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Workload="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.204 [INFO][3915] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" HandleID="k8s-pod-network.35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Workload="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.214 [INFO][3915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.233239 containerd[1584]: 2026-04-21 10:31:28.225 [INFO][3827] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298" Apr 21 10:31:28.236225 containerd[1584]: time="2026-04-21T10:31:28.236198566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-jlm8h,Uid:7ab37260-b13a-4e6c-a73a-39b7c6b94371,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.236669 kubelet[2677]: E0421 10:31:28.236637 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.236731 kubelet[2677]: E0421 10:31:28.236686 
2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-jlm8h" Apr 21 10:31:28.236731 kubelet[2677]: E0421 10:31:28.236724 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-jlm8h" Apr 21 10:31:28.236858 kubelet[2677]: E0421 10:31:28.236809 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-jlm8h_calico-system(7ab37260-b13a-4e6c-a73a-39b7c6b94371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-jlm8h_calico-system(7ab37260-b13a-4e6c-a73a-39b7c6b94371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-jlm8h" podUID="7ab37260-b13a-4e6c-a73a-39b7c6b94371" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.159 [INFO][3869] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.159 
[INFO][3869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" iface="eth0" netns="/var/run/netns/cni-da82a244-3494-fa93-e898-1cd31ae0bee4" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.159 [INFO][3869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" iface="eth0" netns="/var/run/netns/cni-da82a244-3494-fa93-e898-1cd31ae0bee4" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.159 [INFO][3869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" iface="eth0" netns="/var/run/netns/cni-da82a244-3494-fa93-e898-1cd31ae0bee4" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.159 [INFO][3869] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.159 [INFO][3869] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.224 [INFO][3950] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" HandleID="k8s-pod-network.b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Workload="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.226 [INFO][3950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.226 [INFO][3950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.233 [WARNING][3950] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" HandleID="k8s-pod-network.b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Workload="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.233 [INFO][3950] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" HandleID="k8s-pod-network.b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Workload="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.234 [INFO][3950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.238583 containerd[1584]: 2026-04-21 10:31:28.236 [INFO][3869] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301" Apr 21 10:31:28.239416 containerd[1584]: time="2026-04-21T10:31:28.238605346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-xl724,Uid:3a945fca-2767-4f37-a50e-78d8316fd74a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.239495 kubelet[2677]: E0421 10:31:28.238723 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.239495 kubelet[2677]: E0421 10:31:28.238852 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-9d4f98f48-xl724" Apr 21 10:31:28.239495 kubelet[2677]: E0421 10:31:28.238868 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-apiserver-9d4f98f48-xl724" Apr 21 10:31:28.239557 kubelet[2677]: E0421 10:31:28.238896 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d4f98f48-xl724_calico-system(3a945fca-2767-4f37-a50e-78d8316fd74a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d4f98f48-xl724_calico-system(3a945fca-2767-4f37-a50e-78d8316fd74a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-9d4f98f48-xl724" podUID="3a945fca-2767-4f37-a50e-78d8316fd74a" Apr 21 10:31:28.245697 containerd[1584]: time="2026-04-21T10:31:28.245657840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-rr2sp,Uid:a1c477ff-f862-41e4-b022-427067e53a81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.246262 kubelet[2677]: E0421 10:31:28.246211 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.246458 kubelet[2677]: E0421 10:31:28.246398 2677 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-9d4f98f48-rr2sp" Apr 21 10:31:28.246458 kubelet[2677]: E0421 10:31:28.246417 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-9d4f98f48-rr2sp" Apr 21 10:31:28.246715 kubelet[2677]: E0421 10:31:28.246544 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d4f98f48-rr2sp_calico-system(a1c477ff-f862-41e4-b022-427067e53a81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d4f98f48-rr2sp_calico-system(a1c477ff-f862-41e4-b022-427067e53a81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-9d4f98f48-rr2sp" podUID="a1c477ff-f862-41e4-b022-427067e53a81" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.131 [INFO][3866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 
10:31:28.131 [INFO][3866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" iface="eth0" netns="/var/run/netns/cni-d6d5d51b-c082-2602-7305-ec2742e2f9fa" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.131 [INFO][3866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" iface="eth0" netns="/var/run/netns/cni-d6d5d51b-c082-2602-7305-ec2742e2f9fa" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.132 [INFO][3866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" iface="eth0" netns="/var/run/netns/cni-d6d5d51b-c082-2602-7305-ec2742e2f9fa" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.132 [INFO][3866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.132 [INFO][3866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.240 [INFO][3914] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" HandleID="k8s-pod-network.39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Workload="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.240 [INFO][3914] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.242 [INFO][3914] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.249 [WARNING][3914] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" HandleID="k8s-pod-network.39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Workload="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.249 [INFO][3914] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" HandleID="k8s-pod-network.39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Workload="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.253 [INFO][3914] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.256325 containerd[1584]: 2026-04-21 10:31:28.254 [INFO][3866] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4" Apr 21 10:31:28.259218 containerd[1584]: time="2026-04-21T10:31:28.259188401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bz8j8,Uid:27e19655-f488-4a2a-bd85-641f0d0be96a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.259590 kubelet[2677]: E0421 10:31:28.259551 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:31:28.259632 kubelet[2677]: E0421 10:31:28.259594 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bz8j8" Apr 21 10:31:28.259632 kubelet[2677]: E0421 10:31:28.259607 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bz8j8" Apr 21 10:31:28.259669 kubelet[2677]: E0421 10:31:28.259652 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bz8j8_kube-system(27e19655-f488-4a2a-bd85-641f0d0be96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bz8j8_kube-system(27e19655-f488-4a2a-bd85-641f0d0be96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bz8j8" podUID="27e19655-f488-4a2a-bd85-641f0d0be96a" Apr 21 10:31:28.733654 kubelet[2677]: I0421 10:31:28.733627 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:28.734775 containerd[1584]: time="2026-04-21T10:31:28.734153170Z" level=info msg="StopPodSandbox for \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\"" Apr 21 10:31:28.734775 containerd[1584]: time="2026-04-21T10:31:28.734305884Z" level=info msg="Ensure that sandbox 2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d in task-service has been cleanup successfully" Apr 21 10:31:28.738389 kubelet[2677]: E0421 10:31:28.736679 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:28.738473 containerd[1584]: time="2026-04-21T10:31:28.736656242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-xl724,Uid:3a945fca-2767-4f37-a50e-78d8316fd74a,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:28.738473 
containerd[1584]: time="2026-04-21T10:31:28.738418962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bz8j8,Uid:27e19655-f488-4a2a-bd85-641f0d0be96a,Namespace:kube-system,Attempt:0,}" Apr 21 10:31:28.738571 containerd[1584]: time="2026-04-21T10:31:28.738553188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-jlm8h,Uid:7ab37260-b13a-4e6c-a73a-39b7c6b94371,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:28.738754 containerd[1584]: time="2026-04-21T10:31:28.738707465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-rr2sp,Uid:a1c477ff-f862-41e4-b022-427067e53a81,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:28.738894 containerd[1584]: time="2026-04-21T10:31:28.738859229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c66f74b6d-g7cvz,Uid:8edab2e0-6da0-42e7-9867-a2817bcabbc7,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:28.754611 kubelet[2677]: I0421 10:31:28.754549 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qtpqf" podStartSLOduration=2.159379434 podStartE2EDuration="26.754534655s" podCreationTimestamp="2026-04-21 10:31:02 +0000 UTC" firstStartedPulling="2026-04-21 10:31:02.401839383 +0000 UTC m=+15.945221758" lastFinishedPulling="2026-04-21 10:31:26.996994604 +0000 UTC m=+40.540376979" observedRunningTime="2026-04-21 10:31:28.752780332 +0000 UTC m=+42.296162714" watchObservedRunningTime="2026-04-21 10:31:28.754534655 +0000 UTC m=+42.297917040" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.793 [INFO][3994] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.793 [INFO][3994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" iface="eth0" netns="/var/run/netns/cni-ab42f957-ec46-6141-557e-635ee889776f" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.793 [INFO][3994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" iface="eth0" netns="/var/run/netns/cni-ab42f957-ec46-6141-557e-635ee889776f" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.793 [INFO][3994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" iface="eth0" netns="/var/run/netns/cni-ab42f957-ec46-6141-557e-635ee889776f" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.793 [INFO][3994] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.793 [INFO][3994] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.816 [INFO][4069] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.816 [INFO][4069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.816 [INFO][4069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.821 [WARNING][4069] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.821 [INFO][4069] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.823 [INFO][4069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.831982 containerd[1584]: 2026-04-21 10:31:28.825 [INFO][3994] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:28.836301 containerd[1584]: time="2026-04-21T10:31:28.833468306Z" level=info msg="TearDown network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\" successfully" Apr 21 10:31:28.836301 containerd[1584]: time="2026-04-21T10:31:28.833491626Z" level=info msg="StopPodSandbox for \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\" returns successfully" Apr 21 10:31:28.836816 kubelet[2677]: E0421 10:31:28.836512 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:28.837618 containerd[1584]: time="2026-04-21T10:31:28.837299468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fjjgs,Uid:b8131e61-a17e-4cca-839f-e8ca9415fa72,Namespace:kube-system,Attempt:1,}" Apr 21 10:31:28.856636 kubelet[2677]: I0421 10:31:28.856615 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" 
(UniqueName: \"kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-nginx-config\") pod \"29f602f8-9f3c-4ef2-b2a8-11d244890352\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " Apr 21 10:31:28.857084 kubelet[2677]: I0421 10:31:28.857055 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-backend-key-pair\") pod \"29f602f8-9f3c-4ef2-b2a8-11d244890352\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " Apr 21 10:31:28.857715 kubelet[2677]: I0421 10:31:28.857705 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh75p\" (UniqueName: \"kubernetes.io/projected/29f602f8-9f3c-4ef2-b2a8-11d244890352-kube-api-access-nh75p\") pod \"29f602f8-9f3c-4ef2-b2a8-11d244890352\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " Apr 21 10:31:28.857838 kubelet[2677]: I0421 10:31:28.857830 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-ca-bundle\") pod \"29f602f8-9f3c-4ef2-b2a8-11d244890352\" (UID: \"29f602f8-9f3c-4ef2-b2a8-11d244890352\") " Apr 21 10:31:28.858771 kubelet[2677]: I0421 10:31:28.856971 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "29f602f8-9f3c-4ef2-b2a8-11d244890352" (UID: "29f602f8-9f3c-4ef2-b2a8-11d244890352"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:31:28.858834 kubelet[2677]: I0421 10:31:28.858084 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "29f602f8-9f3c-4ef2-b2a8-11d244890352" (UID: "29f602f8-9f3c-4ef2-b2a8-11d244890352"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:31:28.864207 kubelet[2677]: I0421 10:31:28.864193 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f602f8-9f3c-4ef2-b2a8-11d244890352-kube-api-access-nh75p" (OuterVolumeSpecName: "kube-api-access-nh75p") pod "29f602f8-9f3c-4ef2-b2a8-11d244890352" (UID: "29f602f8-9f3c-4ef2-b2a8-11d244890352"). InnerVolumeSpecName "kube-api-access-nh75p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:31:28.869488 kubelet[2677]: I0421 10:31:28.869455 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "29f602f8-9f3c-4ef2-b2a8-11d244890352" (UID: "29f602f8-9f3c-4ef2-b2a8-11d244890352"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:31:28.935251 systemd-networkd[1253]: cali92af6359f72: Link UP Apr 21 10:31:28.935547 systemd-networkd[1253]: cali92af6359f72: Gained carrier Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.807 [ERROR][4027] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.818 [INFO][4027] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0 coredns-674b8bbfcf- kube-system 27e19655-f488-4a2a-bd85-641f0d0be96a 940 0 2026-04-21 10:30:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bz8j8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92af6359f72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.818 [INFO][4027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.853 [INFO][4091] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" 
HandleID="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Workload="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.872 [INFO][4091] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" HandleID="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Workload="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bz8j8", "timestamp":"2026-04-21 10:31:28.853593767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000537600)} Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.872 [INFO][4091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.872 [INFO][4091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.872 [INFO][4091] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.875 [INFO][4091] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.881 [INFO][4091] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.889 [INFO][4091] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.896 [INFO][4091] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.899 [INFO][4091] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.899 [INFO][4091] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.900 [INFO][4091] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.903 [INFO][4091] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.909 [INFO][4091] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.909 [INFO][4091] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" host="localhost" Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.909 [INFO][4091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:28.953246 containerd[1584]: 2026-04-21 10:31:28.910 [INFO][4091] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" HandleID="k8s-pod-network.28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Workload="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.953710 containerd[1584]: 2026-04-21 10:31:28.911 [INFO][4027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27e19655-f488-4a2a-bd85-641f0d0be96a", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bz8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92af6359f72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:28.953710 containerd[1584]: 2026-04-21 10:31:28.911 [INFO][4027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.953710 containerd[1584]: 2026-04-21 10:31:28.911 [INFO][4027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92af6359f72 ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.953710 containerd[1584]: 2026-04-21 10:31:28.937 [INFO][4027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.953710 containerd[1584]: 2026-04-21 10:31:28.939 [INFO][4027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27e19655-f488-4a2a-bd85-641f0d0be96a", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d", Pod:"coredns-674b8bbfcf-bz8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92af6359f72", MAC:"e6:88:b7:58:43:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:28.953710 containerd[1584]: 2026-04-21 10:31:28.949 [INFO][4027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d" Namespace="kube-system" Pod="coredns-674b8bbfcf-bz8j8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bz8j8-eth0" Apr 21 10:31:28.960471 kubelet[2677]: I0421 10:31:28.959950 2677 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 21 10:31:28.960471 kubelet[2677]: I0421 10:31:28.959971 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 21 10:31:28.960471 kubelet[2677]: I0421 10:31:28.959979 2677 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nh75p\" (UniqueName: \"kubernetes.io/projected/29f602f8-9f3c-4ef2-b2a8-11d244890352-kube-api-access-nh75p\") on node \"localhost\" DevicePath \"\"" Apr 21 10:31:28.960471 kubelet[2677]: I0421 10:31:28.959986 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29f602f8-9f3c-4ef2-b2a8-11d244890352-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 21 10:31:29.017119 systemd[1]: run-netns-cni\x2d899b033b\x2d6eef\x2d0d2b\x2d8fab\x2dd1c3b5c23c7c.mount: Deactivated successfully. 
Apr 21 10:31:29.018126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2b8a9889a2472a654b74464f673dd9311a052abfae002c44b8eb9966f027bac-shm.mount: Deactivated successfully. Apr 21 10:31:29.018215 systemd[1]: run-netns-cni\x2dda82a244\x2d3494\x2dfa93\x2de898\x2d1cd31ae0bee4.mount: Deactivated successfully. Apr 21 10:31:29.018274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8ef5c2d7d5aa284280b17b27259464538154a76184c386897a6316633035301-shm.mount: Deactivated successfully. Apr 21 10:31:29.018362 systemd[1]: run-netns-cni\x2d7f54fa22\x2df115\x2d242a\x2d9e53\x2dcdb4121f299a.mount: Deactivated successfully. Apr 21 10:31:29.018447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35603903b47645a5eb3b573c3377a8915b8cd5a86224b88544eb93539087d298-shm.mount: Deactivated successfully. Apr 21 10:31:29.019211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5054e3cdfa9a054cdb4d599beb3a50f3b32d9cc05f4e93c8cedafe38cae4fc4f-shm.mount: Deactivated successfully. Apr 21 10:31:29.019277 systemd[1]: run-netns-cni\x2dd6d5d51b\x2dc082\x2d2602\x2d7305\x2dec2742e2f9fa.mount: Deactivated successfully. Apr 21 10:31:29.019333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39bda9bb38f49cb42a27d4b4fbae8dcc0464faddc5b53259e4045c625647b0d4-shm.mount: Deactivated successfully. Apr 21 10:31:29.019394 systemd[1]: run-netns-cni\x2dab42f957\x2dec46\x2d6141\x2d557e\x2d635ee889776f.mount: Deactivated successfully. Apr 21 10:31:29.020127 systemd[1]: var-lib-kubelet-pods-29f602f8\x2d9f3c\x2d4ef2\x2db2a8\x2d11d244890352-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnh75p.mount: Deactivated successfully. Apr 21 10:31:29.020579 systemd[1]: var-lib-kubelet-pods-29f602f8\x2d9f3c\x2d4ef2\x2db2a8\x2d11d244890352-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 10:31:29.027006 containerd[1584]: time="2026-04-21T10:31:29.026730551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:29.027006 containerd[1584]: time="2026-04-21T10:31:29.026820918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:29.027006 containerd[1584]: time="2026-04-21T10:31:29.026839506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.027006 containerd[1584]: time="2026-04-21T10:31:29.026927628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.049110 systemd-networkd[1253]: cali26b20ead4ff: Link UP Apr 21 10:31:29.049432 systemd-networkd[1253]: cali26b20ead4ff: Gained carrier Apr 21 10:31:29.062430 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.868 [ERROR][4039] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.879 [INFO][4039] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--jlm8h-eth0 goldmane-5b85766d88- calico-system 7ab37260-b13a-4e6c-a73a-39b7c6b94371 939 0 2026-04-21 10:31:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-jlm8h eth0 
goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali26b20ead4ff [] [] }} ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.879 [INFO][4039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.919 [INFO][4130] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" HandleID="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Workload="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.928 [INFO][4130] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" HandleID="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Workload="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-jlm8h", "timestamp":"2026-04-21 10:31:28.919182324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000036dc0)} Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.928 [INFO][4130] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.929 [INFO][4130] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.929 [INFO][4130] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:28.976 [INFO][4130] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.011 [INFO][4130] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.019 [INFO][4130] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.021 [INFO][4130] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.023 [INFO][4130] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.023 [INFO][4130] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.026 [INFO][4130] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65 Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.029 [INFO][4130] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.037 [INFO][4130] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.037 [INFO][4130] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" host="localhost" Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.037 [INFO][4130] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:29.064818 containerd[1584]: 2026-04-21 10:31:29.037 [INFO][4130] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" HandleID="k8s-pod-network.fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Workload="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.065270 containerd[1584]: 2026-04-21 10:31:29.042 [INFO][4039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--jlm8h-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7ab37260-b13a-4e6c-a73a-39b7c6b94371", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-jlm8h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26b20ead4ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.065270 containerd[1584]: 2026-04-21 10:31:29.042 [INFO][4039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.065270 containerd[1584]: 2026-04-21 10:31:29.042 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26b20ead4ff ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.065270 containerd[1584]: 2026-04-21 10:31:29.050 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.065270 containerd[1584]: 2026-04-21 10:31:29.052 [INFO][4039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--jlm8h-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7ab37260-b13a-4e6c-a73a-39b7c6b94371", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65", Pod:"goldmane-5b85766d88-jlm8h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26b20ead4ff", MAC:"0a:80:e1:d1:d7:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.065270 containerd[1584]: 2026-04-21 10:31:29.063 [INFO][4039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65" Namespace="calico-system" Pod="goldmane-5b85766d88-jlm8h" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--jlm8h-eth0" Apr 21 10:31:29.083289 containerd[1584]: time="2026-04-21T10:31:29.083206533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:29.083289 containerd[1584]: time="2026-04-21T10:31:29.083250457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:29.083289 containerd[1584]: time="2026-04-21T10:31:29.083258874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.083496 containerd[1584]: time="2026-04-21T10:31:29.083315505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.089440 containerd[1584]: time="2026-04-21T10:31:29.089418456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bz8j8,Uid:27e19655-f488-4a2a-bd85-641f0d0be96a,Namespace:kube-system,Attempt:0,} returns sandbox id \"28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d\"" Apr 21 10:31:29.090189 kubelet[2677]: E0421 10:31:29.090168 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:29.096303 containerd[1584]: time="2026-04-21T10:31:29.095650255Z" level=info msg="CreateContainer within sandbox \"28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:31:29.105553 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:29.109660 containerd[1584]: time="2026-04-21T10:31:29.109591487Z" level=info msg="CreateContainer within sandbox 
\"28d8f00375150f693bd704b6dc2499c64eb0fe0e42b5c6635fd48d504074068d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"995263a1c34d3bb235013e63f51e3058883973c0606fea5dbf80b676b97f55bf\"" Apr 21 10:31:29.110888 containerd[1584]: time="2026-04-21T10:31:29.110871086Z" level=info msg="StartContainer for \"995263a1c34d3bb235013e63f51e3058883973c0606fea5dbf80b676b97f55bf\"" Apr 21 10:31:29.144657 systemd-networkd[1253]: calid6a3f0d7217: Link UP Apr 21 10:31:29.144913 systemd-networkd[1253]: calid6a3f0d7217: Gained carrier Apr 21 10:31:29.149147 containerd[1584]: time="2026-04-21T10:31:29.148248055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-jlm8h,Uid:7ab37260-b13a-4e6c-a73a-39b7c6b94371,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65\"" Apr 21 10:31:29.152701 containerd[1584]: time="2026-04-21T10:31:29.152536652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:28.866 [ERROR][4041] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:28.877 [INFO][4041] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0 calico-apiserver-9d4f98f48- calico-system a1c477ff-f862-41e4-b022-427067e53a81 944 0 2026-04-21 10:31:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d4f98f48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d4f98f48-rr2sp eth0 calico-apiserver [] [] [kns.calico-system 
ksa.calico-system.calico-apiserver] calid6a3f0d7217 [] [] }} ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:28.877 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:28.934 [INFO][4143] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" HandleID="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Workload="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:28.945 [INFO][4143] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" HandleID="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Workload="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-9d4f98f48-rr2sp", "timestamp":"2026-04-21 10:31:28.93493937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00019cdc0)} Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:28.945 [INFO][4143] ipam/ipam_plugin.go 438: About to 
acquire host-wide IPAM lock. Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.038 [INFO][4143] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.038 [INFO][4143] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.076 [INFO][4143] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.095 [INFO][4143] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.119 [INFO][4143] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.120 [INFO][4143] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.122 [INFO][4143] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.122 [INFO][4143] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.124 [INFO][4143] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23 Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.128 [INFO][4143] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 
10:31:29.136 [INFO][4143] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.136 [INFO][4143] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" host="localhost" Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.137 [INFO][4143] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:29.165240 containerd[1584]: 2026-04-21 10:31:29.137 [INFO][4143] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" HandleID="k8s-pod-network.d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Workload="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.165663 containerd[1584]: 2026-04-21 10:31:29.141 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0", GenerateName:"calico-apiserver-9d4f98f48-", Namespace:"calico-system", SelfLink:"", UID:"a1c477ff-f862-41e4-b022-427067e53a81", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"9d4f98f48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d4f98f48-rr2sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid6a3f0d7217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.165663 containerd[1584]: 2026-04-21 10:31:29.142 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.165663 containerd[1584]: 2026-04-21 10:31:29.142 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6a3f0d7217 ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.165663 containerd[1584]: 2026-04-21 10:31:29.144 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.165663 containerd[1584]: 
2026-04-21 10:31:29.146 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0", GenerateName:"calico-apiserver-9d4f98f48-", Namespace:"calico-system", SelfLink:"", UID:"a1c477ff-f862-41e4-b022-427067e53a81", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d4f98f48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23", Pod:"calico-apiserver-9d4f98f48-rr2sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid6a3f0d7217", MAC:"6a:bd:ab:1a:73:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.165663 containerd[1584]: 2026-04-21 10:31:29.162 [INFO][4041] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-rr2sp" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--rr2sp-eth0" Apr 21 10:31:29.167197 containerd[1584]: time="2026-04-21T10:31:29.167158922Z" level=info msg="StartContainer for \"995263a1c34d3bb235013e63f51e3058883973c0606fea5dbf80b676b97f55bf\" returns successfully" Apr 21 10:31:29.191004 containerd[1584]: time="2026-04-21T10:31:29.186544198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:29.191004 containerd[1584]: time="2026-04-21T10:31:29.186620391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:29.191004 containerd[1584]: time="2026-04-21T10:31:29.186633908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.191004 containerd[1584]: time="2026-04-21T10:31:29.187022639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.210690 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:29.249511 containerd[1584]: time="2026-04-21T10:31:29.249470383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-rr2sp,Uid:a1c477ff-f862-41e4-b022-427067e53a81,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23\"" Apr 21 10:31:29.258679 systemd-networkd[1253]: calicefb16d857e: Link UP Apr 21 10:31:29.259858 systemd-networkd[1253]: calicefb16d857e: Gained carrier Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:28.846 [ERROR][4000] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:28.871 [INFO][4000] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0 calico-kube-controllers-c66f74b6d- calico-system 8edab2e0-6da0-42e7-9867-a2817bcabbc7 942 0 2026-04-21 10:31:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c66f74b6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c66f74b6d-g7cvz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicefb16d857e [] [] }} ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-" Apr 21 
10:31:29.275035 containerd[1584]: 2026-04-21 10:31:28.874 [INFO][4000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:28.956 [INFO][4126] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" HandleID="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Workload="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:28.968 [INFO][4126] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" HandleID="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Workload="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c66f74b6d-g7cvz", "timestamp":"2026-04-21 10:31:28.956955716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003af1e0)} Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:28.968 [INFO][4126] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.137 [INFO][4126] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.137 [INFO][4126] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.178 [INFO][4126] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.196 [INFO][4126] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.221 [INFO][4126] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.223 [INFO][4126] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.225 [INFO][4126] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.225 [INFO][4126] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.228 [INFO][4126] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763 Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.237 [INFO][4126] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.244 [INFO][4126] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.244 [INFO][4126] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" host="localhost" Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.244 [INFO][4126] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:29.275035 containerd[1584]: 2026-04-21 10:31:29.244 [INFO][4126] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" HandleID="k8s-pod-network.4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Workload="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.277300 containerd[1584]: 2026-04-21 10:31:29.251 [INFO][4000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0", GenerateName:"calico-kube-controllers-c66f74b6d-", Namespace:"calico-system", SelfLink:"", UID:"8edab2e0-6da0-42e7-9867-a2817bcabbc7", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c66f74b6d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c66f74b6d-g7cvz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicefb16d857e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.277300 containerd[1584]: 2026-04-21 10:31:29.253 [INFO][4000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.277300 containerd[1584]: 2026-04-21 10:31:29.253 [INFO][4000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicefb16d857e ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.277300 containerd[1584]: 2026-04-21 10:31:29.260 [INFO][4000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.277300 containerd[1584]: 2026-04-21 
10:31:29.260 [INFO][4000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0", GenerateName:"calico-kube-controllers-c66f74b6d-", Namespace:"calico-system", SelfLink:"", UID:"8edab2e0-6da0-42e7-9867-a2817bcabbc7", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c66f74b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763", Pod:"calico-kube-controllers-c66f74b6d-g7cvz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicefb16d857e", MAC:"6a:9e:ad:2b:ba:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.277300 containerd[1584]: 2026-04-21 
10:31:29.272 [INFO][4000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763" Namespace="calico-system" Pod="calico-kube-controllers-c66f74b6d-g7cvz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c66f74b6d--g7cvz-eth0" Apr 21 10:31:29.301307 containerd[1584]: time="2026-04-21T10:31:29.301107011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:29.301307 containerd[1584]: time="2026-04-21T10:31:29.301203827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:29.301307 containerd[1584]: time="2026-04-21T10:31:29.301212771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.301454 containerd[1584]: time="2026-04-21T10:31:29.301289813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.338533 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:29.362835 systemd-networkd[1253]: cali67acf7d155e: Link UP Apr 21 10:31:29.362998 systemd-networkd[1253]: cali67acf7d155e: Gained carrier Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:28.859 [ERROR][4012] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:28.877 [INFO][4012] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0 calico-apiserver-9d4f98f48- calico-system 3a945fca-2767-4f37-a50e-78d8316fd74a 941 0 2026-04-21 10:31:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d4f98f48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d4f98f48-xl724 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali67acf7d155e [] [] }} ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:28.877 [INFO][4012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.376506 containerd[1584]: 
2026-04-21 10:31:28.954 [INFO][4128] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" HandleID="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Workload="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:28.968 [INFO][4128] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" HandleID="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Workload="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-9d4f98f48-xl724", "timestamp":"2026-04-21 10:31:28.954579287 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002cb760)} Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:28.968 [INFO][4128] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.244 [INFO][4128] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.244 [INFO][4128] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.289 [INFO][4128] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.297 [INFO][4128] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.322 [INFO][4128] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.324 [INFO][4128] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.327 [INFO][4128] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.327 [INFO][4128] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.328 [INFO][4128] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.335 [INFO][4128] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.348 [INFO][4128] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.348 [INFO][4128] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" host="localhost" Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.348 [INFO][4128] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:29.376506 containerd[1584]: 2026-04-21 10:31:29.348 [INFO][4128] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" HandleID="k8s-pod-network.bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Workload="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.376997 containerd[1584]: 2026-04-21 10:31:29.357 [INFO][4012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0", GenerateName:"calico-apiserver-9d4f98f48-", Namespace:"calico-system", SelfLink:"", UID:"3a945fca-2767-4f37-a50e-78d8316fd74a", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d4f98f48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d4f98f48-xl724", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali67acf7d155e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.376997 containerd[1584]: 2026-04-21 10:31:29.357 [INFO][4012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.376997 containerd[1584]: 2026-04-21 10:31:29.357 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67acf7d155e ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.376997 containerd[1584]: 2026-04-21 10:31:29.359 [INFO][4012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.376997 containerd[1584]: 2026-04-21 10:31:29.361 [INFO][4012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0", GenerateName:"calico-apiserver-9d4f98f48-", Namespace:"calico-system", SelfLink:"", UID:"3a945fca-2767-4f37-a50e-78d8316fd74a", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d4f98f48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b", Pod:"calico-apiserver-9d4f98f48-xl724", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali67acf7d155e", MAC:"ae:ed:58:b7:c6:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.376997 containerd[1584]: 2026-04-21 10:31:29.371 [INFO][4012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b" 
Namespace="calico-system" Pod="calico-apiserver-9d4f98f48-xl724" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d4f98f48--xl724-eth0" Apr 21 10:31:29.392273 containerd[1584]: time="2026-04-21T10:31:29.392188228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c66f74b6d-g7cvz,Uid:8edab2e0-6da0-42e7-9867-a2817bcabbc7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763\"" Apr 21 10:31:29.405065 containerd[1584]: time="2026-04-21T10:31:29.404983565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:29.405639 containerd[1584]: time="2026-04-21T10:31:29.405412089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:29.405639 containerd[1584]: time="2026-04-21T10:31:29.405430375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.405639 containerd[1584]: time="2026-04-21T10:31:29.405559795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.445845 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:29.454280 systemd-networkd[1253]: calif7e7ddc1745: Link UP Apr 21 10:31:29.454482 systemd-networkd[1253]: calif7e7ddc1745: Gained carrier Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:28.935 [ERROR][4110] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:28.952 [INFO][4110] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0 coredns-674b8bbfcf- kube-system b8131e61-a17e-4cca-839f-e8ca9415fa72 965 0 2026-04-21 10:30:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fjjgs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7e7ddc1745 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:28.952 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.034 [INFO][4175] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" HandleID="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.043 [INFO][4175] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" HandleID="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003160b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fjjgs", "timestamp":"2026-04-21 10:31:29.034807415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff600)} Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.043 [INFO][4175] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.348 [INFO][4175] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.348 [INFO][4175] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.386 [INFO][4175] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.403 [INFO][4175] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.421 [INFO][4175] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.424 [INFO][4175] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.427 [INFO][4175] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.428 [INFO][4175] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.433 [INFO][4175] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7 Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.440 [INFO][4175] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.449 [INFO][4175] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.449 [INFO][4175] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" host="localhost" Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.449 [INFO][4175] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:29.469678 containerd[1584]: 2026-04-21 10:31:29.449 [INFO][4175] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" HandleID="k8s-pod-network.2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.470261 containerd[1584]: 2026-04-21 10:31:29.451 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b8131e61-a17e-4cca-839f-e8ca9415fa72", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fjjgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7e7ddc1745", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.470261 containerd[1584]: 2026-04-21 10:31:29.451 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.470261 containerd[1584]: 2026-04-21 10:31:29.451 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7e7ddc1745 ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.470261 containerd[1584]: 2026-04-21 10:31:29.453 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.470261 containerd[1584]: 2026-04-21 10:31:29.455 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b8131e61-a17e-4cca-839f-e8ca9415fa72", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7", Pod:"coredns-674b8bbfcf-fjjgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7e7ddc1745", MAC:"da:66:69:8a:2f:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:29.470261 containerd[1584]: 2026-04-21 10:31:29.466 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fjjgs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:29.489233 containerd[1584]: time="2026-04-21T10:31:29.488371973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:29.489233 containerd[1584]: time="2026-04-21T10:31:29.488415806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:29.489233 containerd[1584]: time="2026-04-21T10:31:29.488424387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.489233 containerd[1584]: time="2026-04-21T10:31:29.488497546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:29.504495 containerd[1584]: time="2026-04-21T10:31:29.504262062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d4f98f48-xl724,Uid:3a945fca-2767-4f37-a50e-78d8316fd74a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b\"" Apr 21 10:31:29.518548 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:29.555558 containerd[1584]: time="2026-04-21T10:31:29.555147704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fjjgs,Uid:b8131e61-a17e-4cca-839f-e8ca9415fa72,Namespace:kube-system,Attempt:1,} returns sandbox id \"2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7\"" Apr 21 10:31:29.558036 kubelet[2677]: E0421 10:31:29.558009 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:29.567682 containerd[1584]: time="2026-04-21T10:31:29.567654004Z" level=info msg="CreateContainer within sandbox \"2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:31:29.588401 containerd[1584]: time="2026-04-21T10:31:29.588293279Z" level=info msg="CreateContainer within sandbox \"2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35aa83d680b4217d4e59bb0ba86ba9fedce05d8fa8cf3360628e689a27014346\"" Apr 21 10:31:29.589052 containerd[1584]: time="2026-04-21T10:31:29.588996904Z" level=info msg="StartContainer for \"35aa83d680b4217d4e59bb0ba86ba9fedce05d8fa8cf3360628e689a27014346\"" Apr 21 10:31:29.605470 kernel: calico-node[4536]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 
10:31:29.645216 containerd[1584]: time="2026-04-21T10:31:29.645171874Z" level=info msg="StartContainer for \"35aa83d680b4217d4e59bb0ba86ba9fedce05d8fa8cf3360628e689a27014346\" returns successfully" Apr 21 10:31:29.744851 kubelet[2677]: E0421 10:31:29.744626 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:29.745824 kubelet[2677]: E0421 10:31:29.744905 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:29.773456 kubelet[2677]: I0421 10:31:29.773396 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fjjgs" podStartSLOduration=37.773379394 podStartE2EDuration="37.773379394s" podCreationTimestamp="2026-04-21 10:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:31:29.75634906 +0000 UTC m=+43.299731445" watchObservedRunningTime="2026-04-21 10:31:29.773379394 +0000 UTC m=+43.316761779" Apr 21 10:31:29.791660 kubelet[2677]: I0421 10:31:29.791571 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bz8j8" podStartSLOduration=37.791559163 podStartE2EDuration="37.791559163s" podCreationTimestamp="2026-04-21 10:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:31:29.773580578 +0000 UTC m=+43.316962963" watchObservedRunningTime="2026-04-21 10:31:29.791559163 +0000 UTC m=+43.334941544" Apr 21 10:31:29.968769 kubelet[2677]: I0421 10:31:29.968685 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/46ba7d6f-2e97-4521-9856-864850d9e108-nginx-config\") pod \"whisker-7bd4bd579f-jtvvr\" (UID: \"46ba7d6f-2e97-4521-9856-864850d9e108\") " pod="calico-system/whisker-7bd4bd579f-jtvvr" Apr 21 10:31:29.968769 kubelet[2677]: I0421 10:31:29.968721 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46ba7d6f-2e97-4521-9856-864850d9e108-whisker-backend-key-pair\") pod \"whisker-7bd4bd579f-jtvvr\" (UID: \"46ba7d6f-2e97-4521-9856-864850d9e108\") " pod="calico-system/whisker-7bd4bd579f-jtvvr" Apr 21 10:31:29.968769 kubelet[2677]: I0421 10:31:29.968774 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ba7d6f-2e97-4521-9856-864850d9e108-whisker-ca-bundle\") pod \"whisker-7bd4bd579f-jtvvr\" (UID: \"46ba7d6f-2e97-4521-9856-864850d9e108\") " pod="calico-system/whisker-7bd4bd579f-jtvvr" Apr 21 10:31:29.968914 kubelet[2677]: I0421 10:31:29.968788 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkmx\" (UniqueName: \"kubernetes.io/projected/46ba7d6f-2e97-4521-9856-864850d9e108-kube-api-access-spkmx\") pod \"whisker-7bd4bd579f-jtvvr\" (UID: \"46ba7d6f-2e97-4521-9856-864850d9e108\") " pod="calico-system/whisker-7bd4bd579f-jtvvr" Apr 21 10:31:30.023965 systemd-networkd[1253]: vxlan.calico: Link UP Apr 21 10:31:30.023980 systemd-networkd[1253]: vxlan.calico: Gained carrier Apr 21 10:31:30.137580 containerd[1584]: time="2026-04-21T10:31:30.137488702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd4bd579f-jtvvr,Uid:46ba7d6f-2e97-4521-9856-864850d9e108,Namespace:calico-system,Attempt:0,}" Apr 21 10:31:30.178171 systemd-networkd[1253]: cali26b20ead4ff: Gained IPv6LL Apr 21 10:31:30.241869 systemd-networkd[1253]: cali92af6359f72: Gained IPv6LL Apr 21 
10:31:30.273512 systemd-networkd[1253]: cali09423e1d595: Link UP Apr 21 10:31:30.274160 systemd-networkd[1253]: cali09423e1d595: Gained carrier Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.201 [INFO][4747] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0 whisker-7bd4bd579f- calico-system 46ba7d6f-2e97-4521-9856-864850d9e108 1024 0 2026-04-21 10:31:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bd4bd579f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7bd4bd579f-jtvvr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali09423e1d595 [] [] }} ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.201 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.223 [INFO][4763] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" HandleID="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Workload="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.229 [INFO][4763] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" 
HandleID="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Workload="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7bd4bd579f-jtvvr", "timestamp":"2026-04-21 10:31:30.223056236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff600)} Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.230 [INFO][4763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.230 [INFO][4763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.230 [INFO][4763] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.233 [INFO][4763] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.241 [INFO][4763] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.247 [INFO][4763] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.250 [INFO][4763] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.256 [INFO][4763] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:30.286636 
containerd[1584]: 2026-04-21 10:31:30.256 [INFO][4763] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.258 [INFO][4763] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230 Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.262 [INFO][4763] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.268 [INFO][4763] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.268 [INFO][4763] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" host="localhost" Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.268 [INFO][4763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:31:30.286636 containerd[1584]: 2026-04-21 10:31:30.268 [INFO][4763] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" HandleID="k8s-pod-network.18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Workload="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.287134 containerd[1584]: 2026-04-21 10:31:30.271 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0", GenerateName:"whisker-7bd4bd579f-", Namespace:"calico-system", SelfLink:"", UID:"46ba7d6f-2e97-4521-9856-864850d9e108", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd4bd579f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7bd4bd579f-jtvvr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09423e1d595", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:30.287134 containerd[1584]: 2026-04-21 10:31:30.271 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.287134 containerd[1584]: 2026-04-21 10:31:30.271 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09423e1d595 ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.287134 containerd[1584]: 2026-04-21 10:31:30.274 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.287134 containerd[1584]: 2026-04-21 10:31:30.277 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0", GenerateName:"whisker-7bd4bd579f-", Namespace:"calico-system", SelfLink:"", UID:"46ba7d6f-2e97-4521-9856-864850d9e108", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd4bd579f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230", Pod:"whisker-7bd4bd579f-jtvvr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09423e1d595", MAC:"b6:a5:7c:53:a6:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:30.287134 containerd[1584]: 2026-04-21 10:31:30.284 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230" Namespace="calico-system" Pod="whisker-7bd4bd579f-jtvvr" WorkloadEndpoint="localhost-k8s-whisker--7bd4bd579f--jtvvr-eth0" Apr 21 10:31:30.309513 containerd[1584]: time="2026-04-21T10:31:30.309436737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:30.310792 containerd[1584]: time="2026-04-21T10:31:30.310083901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:30.310792 containerd[1584]: time="2026-04-21T10:31:30.310139064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:30.310792 containerd[1584]: time="2026-04-21T10:31:30.310370335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:30.335894 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:30.361801 containerd[1584]: time="2026-04-21T10:31:30.361134965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd4bd579f-jtvvr,Uid:46ba7d6f-2e97-4521-9856-864850d9e108,Namespace:calico-system,Attempt:0,} returns sandbox id \"18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230\"" Apr 21 10:31:30.538137 kubelet[2677]: I0421 10:31:30.538018 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f602f8-9f3c-4ef2-b2a8-11d244890352" path="/var/lib/kubelet/pods/29f602f8-9f3c-4ef2-b2a8-11d244890352/volumes" Apr 21 10:31:30.625937 systemd-networkd[1253]: calid6a3f0d7217: Gained IPv6LL Apr 21 10:31:30.758239 kubelet[2677]: E0421 10:31:30.758079 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:30.759607 kubelet[2677]: E0421 10:31:30.759282 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:30.818062 systemd-networkd[1253]: cali67acf7d155e: Gained IPv6LL Apr 21 10:31:31.137885 systemd-networkd[1253]: calicefb16d857e: Gained IPv6LL Apr 21 10:31:31.329866 systemd-networkd[1253]: calif7e7ddc1745: Gained IPv6LL Apr 21 10:31:31.413937 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:45132.service - OpenSSH per-connection server daemon (10.0.0.1:45132). 
Apr 21 10:31:31.429216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375981803.mount: Deactivated successfully. Apr 21 10:31:31.451661 sshd[4905]: Accepted publickey for core from 10.0.0.1 port 45132 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:31.452636 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:31.457590 systemd-logind[1562]: New session 9 of user core. Apr 21 10:31:31.458438 systemd-networkd[1253]: cali09423e1d595: Gained IPv6LL Apr 21 10:31:31.464009 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:31:31.651289 sshd[4905]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:31.655061 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:31:31.655214 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:45132.service: Deactivated successfully. Apr 21 10:31:31.656940 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:31:31.658021 systemd-logind[1562]: Removed session 9. 
Apr 21 10:31:31.713983 systemd-networkd[1253]: vxlan.calico: Gained IPv6LL Apr 21 10:31:31.743040 containerd[1584]: time="2026-04-21T10:31:31.742976528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:31.743516 containerd[1584]: time="2026-04-21T10:31:31.743473256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:31:31.744161 containerd[1584]: time="2026-04-21T10:31:31.744142411Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:31.746027 containerd[1584]: time="2026-04-21T10:31:31.745984706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:31.746631 containerd[1584]: time="2026-04-21T10:31:31.746596103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.594033489s" Apr 21 10:31:31.746656 containerd[1584]: time="2026-04-21T10:31:31.746630250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:31:31.747843 containerd[1584]: time="2026-04-21T10:31:31.747818884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:31:31.750376 containerd[1584]: time="2026-04-21T10:31:31.750308290Z" level=info msg="CreateContainer 
within sandbox \"fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:31:31.760030 kubelet[2677]: E0421 10:31:31.759989 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:31.760596 kubelet[2677]: E0421 10:31:31.760547 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:31.762395 containerd[1584]: time="2026-04-21T10:31:31.762348844Z" level=info msg="CreateContainer within sandbox \"fa398a70b85242c7222428e2d94ace9afd0d2825d1fc57448798f46abaa88c65\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2dfe6237d57b4dc88a011a5f1d2ba81a7918aef7e4d3a03ea7ca2f322dc05832\"" Apr 21 10:31:31.762689 containerd[1584]: time="2026-04-21T10:31:31.762667532Z" level=info msg="StartContainer for \"2dfe6237d57b4dc88a011a5f1d2ba81a7918aef7e4d3a03ea7ca2f322dc05832\"" Apr 21 10:31:31.824292 containerd[1584]: time="2026-04-21T10:31:31.824244551Z" level=info msg="StartContainer for \"2dfe6237d57b4dc88a011a5f1d2ba81a7918aef7e4d3a03ea7ca2f322dc05832\" returns successfully" Apr 21 10:31:32.763252 kubelet[2677]: E0421 10:31:32.763213 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:31:32.774556 kubelet[2677]: I0421 10:31:32.774498 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-jlm8h" podStartSLOduration=29.178921143 podStartE2EDuration="31.774487801s" podCreationTimestamp="2026-04-21 10:31:01 +0000 UTC" firstStartedPulling="2026-04-21 10:31:29.152159713 +0000 UTC m=+42.695542088" lastFinishedPulling="2026-04-21 
10:31:31.747726372 +0000 UTC m=+45.291108746" observedRunningTime="2026-04-21 10:31:32.773184703 +0000 UTC m=+46.316567088" watchObservedRunningTime="2026-04-21 10:31:32.774487801 +0000 UTC m=+46.317870183" Apr 21 10:31:32.784014 systemd[1]: run-containerd-runc-k8s.io-2dfe6237d57b4dc88a011a5f1d2ba81a7918aef7e4d3a03ea7ca2f322dc05832-runc.mcTepa.mount: Deactivated successfully. Apr 21 10:31:35.207693 containerd[1584]: time="2026-04-21T10:31:35.207640570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:35.208537 containerd[1584]: time="2026-04-21T10:31:35.208467133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:31:35.209226 containerd[1584]: time="2026-04-21T10:31:35.209192387Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:35.211188 containerd[1584]: time="2026-04-21T10:31:35.211156299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:35.211978 containerd[1584]: time="2026-04-21T10:31:35.211892285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.464051777s" Apr 21 10:31:35.211978 containerd[1584]: time="2026-04-21T10:31:35.211919097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:31:35.213087 containerd[1584]: time="2026-04-21T10:31:35.213041468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:31:35.216729 containerd[1584]: time="2026-04-21T10:31:35.216699920Z" level=info msg="CreateContainer within sandbox \"d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:31:35.228445 containerd[1584]: time="2026-04-21T10:31:35.228412705Z" level=info msg="CreateContainer within sandbox \"d9cdb21229e35bb5e9f16e34bda7d4dd61f792410a24e63e6b67d0179ff09f23\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70fa5a417da2dca5e5c812e921cc79b35d1dd4e644a754433a34ca4d9e903fe6\"" Apr 21 10:31:35.228798 containerd[1584]: time="2026-04-21T10:31:35.228770808Z" level=info msg="StartContainer for \"70fa5a417da2dca5e5c812e921cc79b35d1dd4e644a754433a34ca4d9e903fe6\"" Apr 21 10:31:35.284631 containerd[1584]: time="2026-04-21T10:31:35.284567253Z" level=info msg="StartContainer for \"70fa5a417da2dca5e5c812e921cc79b35d1dd4e644a754433a34ca4d9e903fe6\" returns successfully" Apr 21 10:31:35.779882 kubelet[2677]: I0421 10:31:35.779818 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-9d4f98f48-rr2sp" podStartSLOduration=28.817595717 podStartE2EDuration="34.779804663s" podCreationTimestamp="2026-04-21 10:31:01 +0000 UTC" firstStartedPulling="2026-04-21 10:31:29.250719421 +0000 UTC m=+42.794101795" lastFinishedPulling="2026-04-21 10:31:35.212928363 +0000 UTC m=+48.756310741" observedRunningTime="2026-04-21 10:31:35.779615103 +0000 UTC m=+49.322997494" watchObservedRunningTime="2026-04-21 10:31:35.779804663 +0000 UTC m=+49.323187049" Apr 21 10:31:35.782887 systemd[1]: 
run-containerd-runc-k8s.io-70fa5a417da2dca5e5c812e921cc79b35d1dd4e644a754433a34ca4d9e903fe6-runc.WGLYcX.mount: Deactivated successfully. Apr 21 10:31:36.665025 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:33776.service - OpenSSH per-connection server daemon (10.0.0.1:33776). Apr 21 10:31:36.712176 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 33776 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:36.715368 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:36.719030 systemd-logind[1562]: New session 10 of user core. Apr 21 10:31:36.722932 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:31:36.774150 kubelet[2677]: I0421 10:31:36.774101 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:31:36.862851 sshd[5106]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:36.865768 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:33776.service: Deactivated successfully. Apr 21 10:31:36.867378 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:31:36.867475 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:31:36.868177 systemd-logind[1562]: Removed session 10. 
Apr 21 10:31:38.121606 containerd[1584]: time="2026-04-21T10:31:38.121541729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:38.122334 containerd[1584]: time="2026-04-21T10:31:38.122251487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:31:38.122948 containerd[1584]: time="2026-04-21T10:31:38.122897575Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:38.124772 containerd[1584]: time="2026-04-21T10:31:38.124700588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:38.125181 containerd[1584]: time="2026-04-21T10:31:38.125140786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.912069361s" Apr 21 10:31:38.125181 containerd[1584]: time="2026-04-21T10:31:38.125173460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:31:38.126591 containerd[1584]: time="2026-04-21T10:31:38.126355071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:31:38.136193 containerd[1584]: time="2026-04-21T10:31:38.136167029Z" level=info msg="CreateContainer within sandbox 
\"4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:31:38.146242 containerd[1584]: time="2026-04-21T10:31:38.146201248Z" level=info msg="CreateContainer within sandbox \"4d168141046aef73461133746384e0c18494eb3cf775b2765daaa93ae99e6763\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d2b3962c04b3fd2f344928477686e103a264f2ae0e08ff93dd0cd55dab0158e7\"" Apr 21 10:31:38.146535 containerd[1584]: time="2026-04-21T10:31:38.146514992Z" level=info msg="StartContainer for \"d2b3962c04b3fd2f344928477686e103a264f2ae0e08ff93dd0cd55dab0158e7\"" Apr 21 10:31:38.247802 containerd[1584]: time="2026-04-21T10:31:38.247770232Z" level=info msg="StartContainer for \"d2b3962c04b3fd2f344928477686e103a264f2ae0e08ff93dd0cd55dab0158e7\" returns successfully" Apr 21 10:31:38.556724 containerd[1584]: time="2026-04-21T10:31:38.556433973Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:38.558054 containerd[1584]: time="2026-04-21T10:31:38.557801498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:31:38.559988 containerd[1584]: time="2026-04-21T10:31:38.559893088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 433.517818ms" Apr 21 10:31:38.559988 containerd[1584]: time="2026-04-21T10:31:38.559952614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:31:38.564007 containerd[1584]: time="2026-04-21T10:31:38.563952821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:31:38.570902 containerd[1584]: time="2026-04-21T10:31:38.570179321Z" level=info msg="CreateContainer within sandbox \"bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:31:38.608017 containerd[1584]: time="2026-04-21T10:31:38.607962271Z" level=info msg="CreateContainer within sandbox \"bab2d606a29b88ab172910a34b1a4c485a28ac0da706895c7915c5d7a6b8852b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e1c7fb80f0b6005cbd618115a37427c92ccd2e4f6f9222f20950c8dcf66db16\"" Apr 21 10:31:38.609239 containerd[1584]: time="2026-04-21T10:31:38.608447487Z" level=info msg="StartContainer for \"3e1c7fb80f0b6005cbd618115a37427c92ccd2e4f6f9222f20950c8dcf66db16\"" Apr 21 10:31:38.677633 containerd[1584]: time="2026-04-21T10:31:38.677599898Z" level=info msg="StartContainer for \"3e1c7fb80f0b6005cbd618115a37427c92ccd2e4f6f9222f20950c8dcf66db16\" returns successfully" Apr 21 10:31:38.793597 kubelet[2677]: I0421 10:31:38.793422 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c66f74b6d-g7cvz" podStartSLOduration=28.060512768 podStartE2EDuration="36.793408778s" podCreationTimestamp="2026-04-21 10:31:02 +0000 UTC" firstStartedPulling="2026-04-21 10:31:29.393332487 +0000 UTC m=+42.936714862" lastFinishedPulling="2026-04-21 10:31:38.126228494 +0000 UTC m=+51.669610872" observedRunningTime="2026-04-21 10:31:38.791541973 +0000 UTC m=+52.334924360" watchObservedRunningTime="2026-04-21 10:31:38.793408778 +0000 UTC m=+52.336791163" Apr 21 10:31:38.801831 kubelet[2677]: I0421 10:31:38.800884 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-apiserver-9d4f98f48-xl724" podStartSLOduration=28.744856971 podStartE2EDuration="37.800873002s" podCreationTimestamp="2026-04-21 10:31:01 +0000 UTC" firstStartedPulling="2026-04-21 10:31:29.506400162 +0000 UTC m=+43.049782536" lastFinishedPulling="2026-04-21 10:31:38.562416187 +0000 UTC m=+52.105798567" observedRunningTime="2026-04-21 10:31:38.800565977 +0000 UTC m=+52.343948363" watchObservedRunningTime="2026-04-21 10:31:38.800873002 +0000 UTC m=+52.344255386" Apr 21 10:31:39.783249 kubelet[2677]: I0421 10:31:39.783203 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:31:40.371434 containerd[1584]: time="2026-04-21T10:31:40.371394832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:40.372227 containerd[1584]: time="2026-04-21T10:31:40.372187350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:31:40.372978 containerd[1584]: time="2026-04-21T10:31:40.372959789Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:40.374722 containerd[1584]: time="2026-04-21T10:31:40.374684272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:40.375178 containerd[1584]: time="2026-04-21T10:31:40.375125362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", 
size \"7595926\" in 1.811149828s" Apr 21 10:31:40.375178 containerd[1584]: time="2026-04-21T10:31:40.375168517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:31:40.379763 containerd[1584]: time="2026-04-21T10:31:40.379654108Z" level=info msg="CreateContainer within sandbox \"18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:31:40.399058 containerd[1584]: time="2026-04-21T10:31:40.399006797Z" level=info msg="CreateContainer within sandbox \"18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"15c6798e1f8c3a8f4d0082fe612fd9f7428f8a16565e0d8c8b04729ea04b86a7\"" Apr 21 10:31:40.399742 containerd[1584]: time="2026-04-21T10:31:40.399699517Z" level=info msg="StartContainer for \"15c6798e1f8c3a8f4d0082fe612fd9f7428f8a16565e0d8c8b04729ea04b86a7\"" Apr 21 10:31:40.453402 containerd[1584]: time="2026-04-21T10:31:40.453369564Z" level=info msg="StartContainer for \"15c6798e1f8c3a8f4d0082fe612fd9f7428f8a16565e0d8c8b04729ea04b86a7\" returns successfully" Apr 21 10:31:40.454578 containerd[1584]: time="2026-04-21T10:31:40.454554658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:31:41.872961 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790). Apr 21 10:31:41.909589 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:41.910628 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:41.913945 systemd-logind[1562]: New session 11 of user core. Apr 21 10:31:41.917912 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 21 10:31:42.176888 sshd[5298]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:42.179452 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:33790.service: Deactivated successfully. Apr 21 10:31:42.180927 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:31:42.180970 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:31:42.181885 systemd-logind[1562]: Removed session 11. Apr 21 10:31:42.543333 containerd[1584]: time="2026-04-21T10:31:42.543293186Z" level=info msg="StopPodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\"" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.584 [INFO][5326] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.585 [INFO][5326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" iface="eth0" netns="/var/run/netns/cni-747fc3e6-4587-e641-4033-2f9992d27376" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.585 [INFO][5326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" iface="eth0" netns="/var/run/netns/cni-747fc3e6-4587-e641-4033-2f9992d27376" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.585 [INFO][5326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" iface="eth0" netns="/var/run/netns/cni-747fc3e6-4587-e641-4033-2f9992d27376" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.585 [INFO][5326] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.585 [INFO][5326] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.625 [INFO][5335] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.625 [INFO][5335] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.625 [INFO][5335] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.631 [WARNING][5335] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.631 [INFO][5335] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.632 [INFO][5335] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:42.635559 containerd[1584]: 2026-04-21 10:31:42.633 [INFO][5326] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:42.635928 containerd[1584]: time="2026-04-21T10:31:42.635705724Z" level=info msg="TearDown network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" successfully" Apr 21 10:31:42.635928 containerd[1584]: time="2026-04-21T10:31:42.635730789Z" level=info msg="StopPodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" returns successfully" Apr 21 10:31:42.638342 systemd[1]: run-netns-cni\x2d747fc3e6\x2d4587\x2de641\x2d4033\x2d2f9992d27376.mount: Deactivated successfully. 
Apr 21 10:31:42.639060 containerd[1584]: time="2026-04-21T10:31:42.638953028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78vjk,Uid:452bb3ff-3a51-4e73-8032-feb90475c95f,Namespace:calico-system,Attempt:1,}" Apr 21 10:31:42.799951 systemd-networkd[1253]: calida99d1a3f3e: Link UP Apr 21 10:31:42.800705 systemd-networkd[1253]: calida99d1a3f3e: Gained carrier Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.702 [INFO][5346] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--78vjk-eth0 csi-node-driver- calico-system 452bb3ff-3a51-4e73-8032-feb90475c95f 1128 0 2026-04-21 10:31:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-78vjk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calida99d1a3f3e [] [] }} ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.702 [INFO][5346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.730 [INFO][5359] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" HandleID="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" 
Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.745 [INFO][5359] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" HandleID="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-78vjk", "timestamp":"2026-04-21 10:31:42.730720293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002871e0)} Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.745 [INFO][5359] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.745 [INFO][5359] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.745 [INFO][5359] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.757 [INFO][5359] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.766 [INFO][5359] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.770 [INFO][5359] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.772 [INFO][5359] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.776 [INFO][5359] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.776 [INFO][5359] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.777 [INFO][5359] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54 Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.783 [INFO][5359] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.790 [INFO][5359] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.790 [INFO][5359] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" host="localhost" Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.790 [INFO][5359] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:42.826494 containerd[1584]: 2026-04-21 10:31:42.792 [INFO][5359] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" HandleID="k8s-pod-network.a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.826968 containerd[1584]: 2026-04-21 10:31:42.796 [INFO][5346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--78vjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"452bb3ff-3a51-4e73-8032-feb90475c95f", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-78vjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida99d1a3f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:42.826968 containerd[1584]: 2026-04-21 10:31:42.796 [INFO][5346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.826968 containerd[1584]: 2026-04-21 10:31:42.796 [INFO][5346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida99d1a3f3e ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.826968 containerd[1584]: 2026-04-21 10:31:42.802 [INFO][5346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.826968 containerd[1584]: 2026-04-21 10:31:42.804 [INFO][5346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" 
Namespace="calico-system" Pod="csi-node-driver-78vjk" WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--78vjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"452bb3ff-3a51-4e73-8032-feb90475c95f", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54", Pod:"csi-node-driver-78vjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida99d1a3f3e", MAC:"4e:20:74:8d:03:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:42.826968 containerd[1584]: 2026-04-21 10:31:42.819 [INFO][5346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54" Namespace="calico-system" Pod="csi-node-driver-78vjk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:42.841266 containerd[1584]: time="2026-04-21T10:31:42.841131618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:31:42.841266 containerd[1584]: time="2026-04-21T10:31:42.841234561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:31:42.841266 containerd[1584]: time="2026-04-21T10:31:42.841246924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:42.841444 containerd[1584]: time="2026-04-21T10:31:42.841371696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:31:42.865573 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:31:42.876864 containerd[1584]: time="2026-04-21T10:31:42.876818048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78vjk,Uid:452bb3ff-3a51-4e73-8032-feb90475c95f,Namespace:calico-system,Attempt:1,} returns sandbox id \"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54\"" Apr 21 10:31:42.917073 containerd[1584]: time="2026-04-21T10:31:42.917023768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:42.917779 containerd[1584]: time="2026-04-21T10:31:42.917708689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:31:42.918579 containerd[1584]: time="2026-04-21T10:31:42.918554142Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:42.921331 containerd[1584]: time="2026-04-21T10:31:42.921281021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:42.921963 containerd[1584]: time="2026-04-21T10:31:42.921940263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.467353348s" Apr 21 10:31:42.922029 containerd[1584]: time="2026-04-21T10:31:42.921969313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:31:42.924897 containerd[1584]: time="2026-04-21T10:31:42.924875731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:31:42.929775 containerd[1584]: time="2026-04-21T10:31:42.929695190Z" level=info msg="CreateContainer within sandbox \"18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:31:42.940931 containerd[1584]: time="2026-04-21T10:31:42.940905606Z" level=info msg="CreateContainer within sandbox \"18a555564fbf8c1f32f4f32ef6dfab053781fc3e0c23321a97e3d1deb535e230\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f39de44423a3215904c2e2d04e8aa33b5b2a21be8e57e1660066f738dc50bfe4\"" Apr 21 10:31:42.941269 containerd[1584]: time="2026-04-21T10:31:42.941245062Z" level=info msg="StartContainer for 
\"f39de44423a3215904c2e2d04e8aa33b5b2a21be8e57e1660066f738dc50bfe4\"" Apr 21 10:31:42.992035 containerd[1584]: time="2026-04-21T10:31:42.991982527Z" level=info msg="StartContainer for \"f39de44423a3215904c2e2d04e8aa33b5b2a21be8e57e1660066f738dc50bfe4\" returns successfully" Apr 21 10:31:43.641543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127994501.mount: Deactivated successfully. Apr 21 10:31:43.810282 kubelet[2677]: I0421 10:31:43.810230 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bd4bd579f-jtvvr" podStartSLOduration=2.247716209 podStartE2EDuration="14.810215652s" podCreationTimestamp="2026-04-21 10:31:29 +0000 UTC" firstStartedPulling="2026-04-21 10:31:30.362148438 +0000 UTC m=+43.905530811" lastFinishedPulling="2026-04-21 10:31:42.92464788 +0000 UTC m=+56.468030254" observedRunningTime="2026-04-21 10:31:43.809557299 +0000 UTC m=+57.352939684" watchObservedRunningTime="2026-04-21 10:31:43.810215652 +0000 UTC m=+57.353598036" Apr 21 10:31:44.322034 systemd-networkd[1253]: calida99d1a3f3e: Gained IPv6LL Apr 21 10:31:44.690608 containerd[1584]: time="2026-04-21T10:31:44.690547235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:44.691244 containerd[1584]: time="2026-04-21T10:31:44.691129835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:31:44.692034 containerd[1584]: time="2026-04-21T10:31:44.692010257Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:44.694013 containerd[1584]: time="2026-04-21T10:31:44.693981214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:44.694417 containerd[1584]: time="2026-04-21T10:31:44.694378211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.769473874s" Apr 21 10:31:44.694417 containerd[1584]: time="2026-04-21T10:31:44.694407589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:31:44.699109 containerd[1584]: time="2026-04-21T10:31:44.699059884Z" level=info msg="CreateContainer within sandbox \"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:31:44.717064 containerd[1584]: time="2026-04-21T10:31:44.717024497Z" level=info msg="CreateContainer within sandbox \"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e1880f63c5c15a09d9620f78fe92d63961fdca604242d8ead904708d959fb4c\"" Apr 21 10:31:44.717474 containerd[1584]: time="2026-04-21T10:31:44.717426799Z" level=info msg="StartContainer for \"8e1880f63c5c15a09d9620f78fe92d63961fdca604242d8ead904708d959fb4c\"" Apr 21 10:31:44.737542 systemd[1]: run-containerd-runc-k8s.io-8e1880f63c5c15a09d9620f78fe92d63961fdca604242d8ead904708d959fb4c-runc.614ulb.mount: Deactivated successfully. 
Apr 21 10:31:44.782195 containerd[1584]: time="2026-04-21T10:31:44.782126376Z" level=info msg="StartContainer for \"8e1880f63c5c15a09d9620f78fe92d63961fdca604242d8ead904708d959fb4c\" returns successfully" Apr 21 10:31:44.783444 containerd[1584]: time="2026-04-21T10:31:44.783411862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:31:46.543494 containerd[1584]: time="2026-04-21T10:31:46.543442016Z" level=info msg="StopPodSandbox for \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\"" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.596 [WARNING][5526] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b8131e61-a17e-4cca-839f-e8ca9415fa72", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7", Pod:"coredns-674b8bbfcf-fjjgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7e7ddc1745", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.596 [INFO][5526] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.596 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" iface="eth0" netns="" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.596 [INFO][5526] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.596 [INFO][5526] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.622 [INFO][5538] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.623 [INFO][5538] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.623 [INFO][5538] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.629 [WARNING][5538] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.629 [INFO][5538] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.630 [INFO][5538] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:46.634080 containerd[1584]: 2026-04-21 10:31:46.632 [INFO][5526] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.634080 containerd[1584]: time="2026-04-21T10:31:46.633974110Z" level=info msg="TearDown network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\" successfully" Apr 21 10:31:46.634080 containerd[1584]: time="2026-04-21T10:31:46.633995983Z" level=info msg="StopPodSandbox for \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\" returns successfully" Apr 21 10:31:46.690076 containerd[1584]: time="2026-04-21T10:31:46.689970123Z" level=info msg="RemovePodSandbox for \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\"" Apr 21 10:31:46.693060 containerd[1584]: time="2026-04-21T10:31:46.693027096Z" level=info msg="Forcibly stopping sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\"" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.730 [WARNING][5556] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b8131e61-a17e-4cca-839f-e8ca9415fa72", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d62f981bb59bb608874ac1c6ca9cc0221129d3f21876a3edfb49233ebc4aab7", Pod:"coredns-674b8bbfcf-fjjgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7e7ddc1745", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.730 [INFO][5556] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.730 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" iface="eth0" netns="" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.730 [INFO][5556] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.730 [INFO][5556] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.757 [INFO][5564] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.757 [INFO][5564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.758 [INFO][5564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.765 [WARNING][5564] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.765 [INFO][5564] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" HandleID="k8s-pod-network.2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Workload="localhost-k8s-coredns--674b8bbfcf--fjjgs-eth0" Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.766 [INFO][5564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:46.770598 containerd[1584]: 2026-04-21 10:31:46.768 [INFO][5556] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d" Apr 21 10:31:46.771004 containerd[1584]: time="2026-04-21T10:31:46.770621561Z" level=info msg="TearDown network for sandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\" successfully" Apr 21 10:31:46.784300 containerd[1584]: time="2026-04-21T10:31:46.784248058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:31:46.784448 containerd[1584]: time="2026-04-21T10:31:46.784332578Z" level=info msg="RemovePodSandbox \"2225a02e7e5e1bef8929d4e485fc4a03a31f0e6a1a9cff501f02109b1592c16d\" returns successfully" Apr 21 10:31:46.791552 containerd[1584]: time="2026-04-21T10:31:46.791491202Z" level=info msg="StopPodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\"" Apr 21 10:31:46.791628 containerd[1584]: time="2026-04-21T10:31:46.791594249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:46.792957 containerd[1584]: time="2026-04-21T10:31:46.792918248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:31:46.793969 containerd[1584]: time="2026-04-21T10:31:46.793833660Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:46.796239 containerd[1584]: time="2026-04-21T10:31:46.796212110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:31:46.796863 containerd[1584]: time="2026-04-21T10:31:46.796773157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.013322453s" Apr 21 10:31:46.796863 containerd[1584]: time="2026-04-21T10:31:46.796797361Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:31:46.809036 containerd[1584]: time="2026-04-21T10:31:46.808998862Z" level=info msg="CreateContainer within sandbox \"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:31:46.829419 containerd[1584]: time="2026-04-21T10:31:46.829373710Z" level=info msg="CreateContainer within sandbox \"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e074b1e77686acaef99bda80d475747cad55fd1ca23fa7f69be75f7ff391b82\"" Apr 21 10:31:46.830900 containerd[1584]: time="2026-04-21T10:31:46.830881709Z" level=info msg="StartContainer for \"8e074b1e77686acaef99bda80d475747cad55fd1ca23fa7f69be75f7ff391b82\"" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.825 [WARNING][5582] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--78vjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"452bb3ff-3a51-4e73-8032-feb90475c95f", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54", Pod:"csi-node-driver-78vjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida99d1a3f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.825 [INFO][5582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.825 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" iface="eth0" netns="" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.825 [INFO][5582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.825 [INFO][5582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.849 [INFO][5590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.849 [INFO][5590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.849 [INFO][5590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.856 [WARNING][5590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.856 [INFO][5590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.859 [INFO][5590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:46.863132 containerd[1584]: 2026-04-21 10:31:46.861 [INFO][5582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.863526 containerd[1584]: time="2026-04-21T10:31:46.863147245Z" level=info msg="TearDown network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" successfully" Apr 21 10:31:46.863526 containerd[1584]: time="2026-04-21T10:31:46.863179723Z" level=info msg="StopPodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" returns successfully" Apr 21 10:31:46.863526 containerd[1584]: time="2026-04-21T10:31:46.863517517Z" level=info msg="RemovePodSandbox for \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\"" Apr 21 10:31:46.863576 containerd[1584]: time="2026-04-21T10:31:46.863542512Z" level=info msg="Forcibly stopping sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\"" Apr 21 10:31:46.880069 containerd[1584]: time="2026-04-21T10:31:46.880014556Z" level=info msg="StartContainer for \"8e074b1e77686acaef99bda80d475747cad55fd1ca23fa7f69be75f7ff391b82\" returns successfully" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 
10:31:46.893 [WARNING][5630] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--78vjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"452bb3ff-3a51-4e73-8032-feb90475c95f", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 31, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a40def0c35a6a2d91c21386c0551259bbaaf9b4e46eace654deae1613fbb0e54", Pod:"csi-node-driver-78vjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida99d1a3f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.893 [INFO][5630] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.921083 
containerd[1584]: 2026-04-21 10:31:46.893 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" iface="eth0" netns="" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.893 [INFO][5630] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.893 [INFO][5630] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.912 [INFO][5650] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.912 [INFO][5650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.912 [INFO][5650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.917 [WARNING][5650] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.917 [INFO][5650] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" HandleID="k8s-pod-network.41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Workload="localhost-k8s-csi--node--driver--78vjk-eth0" Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.918 [INFO][5650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:31:46.921083 containerd[1584]: 2026-04-21 10:31:46.919 [INFO][5630] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1" Apr 21 10:31:46.921484 containerd[1584]: time="2026-04-21T10:31:46.921123057Z" level=info msg="TearDown network for sandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" successfully" Apr 21 10:31:46.924258 containerd[1584]: time="2026-04-21T10:31:46.924221966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:31:46.924363 containerd[1584]: time="2026-04-21T10:31:46.924280382Z" level=info msg="RemovePodSandbox \"41a385a46c0549bfbbf3c694d7a89278fcbeab25d709b2f43437164c40505de1\" returns successfully" Apr 21 10:31:47.188945 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:58388.service - OpenSSH per-connection server daemon (10.0.0.1:58388). 
Apr 21 10:31:47.225773 sshd[5657]: Accepted publickey for core from 10.0.0.1 port 58388 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:47.227438 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:47.230888 systemd-logind[1562]: New session 12 of user core. Apr 21 10:31:47.236920 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:31:47.436402 sshd[5657]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:47.438929 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:58388.service: Deactivated successfully. Apr 21 10:31:47.440870 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:31:47.440881 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:31:47.442055 systemd-logind[1562]: Removed session 12. Apr 21 10:31:47.665570 kubelet[2677]: I0421 10:31:47.665507 2677 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:31:47.667062 kubelet[2677]: I0421 10:31:47.667036 2677 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:31:47.830699 kubelet[2677]: I0421 10:31:47.830412 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-78vjk" podStartSLOduration=41.907782295 podStartE2EDuration="45.830397349s" podCreationTimestamp="2026-04-21 10:31:02 +0000 UTC" firstStartedPulling="2026-04-21 10:31:42.877710917 +0000 UTC m=+56.421093292" lastFinishedPulling="2026-04-21 10:31:46.800325972 +0000 UTC m=+60.343708346" observedRunningTime="2026-04-21 10:31:47.829205593 +0000 UTC m=+61.372587975" watchObservedRunningTime="2026-04-21 10:31:47.830397349 +0000 UTC m=+61.373779732" Apr 21 10:31:52.446952 systemd[1]: Started 
sshd@12-10.0.0.82:22-10.0.0.1:58392.service - OpenSSH per-connection server daemon (10.0.0.1:58392). Apr 21 10:31:52.478442 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:52.479982 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:52.483823 systemd-logind[1562]: New session 13 of user core. Apr 21 10:31:52.489166 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:31:52.596487 sshd[5709]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:52.603982 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:58398.service - OpenSSH per-connection server daemon (10.0.0.1:58398). Apr 21 10:31:52.604337 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:58392.service: Deactivated successfully. Apr 21 10:31:52.605644 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:31:52.607247 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:31:52.608351 systemd-logind[1562]: Removed session 13. Apr 21 10:31:52.635655 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 58398 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:52.636929 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:52.640015 systemd-logind[1562]: New session 14 of user core. Apr 21 10:31:52.647957 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:31:52.800061 sshd[5724]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:52.810259 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:58400.service - OpenSSH per-connection server daemon (10.0.0.1:58400). Apr 21 10:31:52.810554 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:58398.service: Deactivated successfully. Apr 21 10:31:52.814536 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. 
Apr 21 10:31:52.815530 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:31:52.816790 systemd-logind[1562]: Removed session 14. Apr 21 10:31:52.844234 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 58400 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:52.845374 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:52.848656 systemd-logind[1562]: New session 15 of user core. Apr 21 10:31:52.853956 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:31:52.951654 sshd[5737]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:52.954157 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:58400.service: Deactivated successfully. Apr 21 10:31:52.955839 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:31:52.955892 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:31:52.956651 systemd-logind[1562]: Removed session 15. Apr 21 10:31:57.965954 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:32816.service - OpenSSH per-connection server daemon (10.0.0.1:32816). Apr 21 10:31:57.996447 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 32816 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:57.997539 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:58.000853 systemd-logind[1562]: New session 16 of user core. Apr 21 10:31:58.005024 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:31:58.105443 sshd[5764]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:58.117234 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:32822.service - OpenSSH per-connection server daemon (10.0.0.1:32822). Apr 21 10:31:58.117846 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:32816.service: Deactivated successfully. Apr 21 10:31:58.119456 systemd[1]: session-16.scope: Deactivated successfully. 
Apr 21 10:31:58.121033 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:31:58.121976 systemd-logind[1562]: Removed session 16. Apr 21 10:31:58.147716 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 32822 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:58.148822 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:58.152003 systemd-logind[1562]: New session 17 of user core. Apr 21 10:31:58.158964 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:31:58.438243 sshd[5778]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:58.443970 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:32838.service - OpenSSH per-connection server daemon (10.0.0.1:32838). Apr 21 10:31:58.444385 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:32822.service: Deactivated successfully. Apr 21 10:31:58.446636 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:31:58.447148 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:31:58.447934 systemd-logind[1562]: Removed session 17. Apr 21 10:31:58.476079 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 32838 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:58.477119 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:58.480214 systemd-logind[1562]: New session 18 of user core. Apr 21 10:31:58.487923 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:31:58.912656 sshd[5790]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:58.920321 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:32842.service - OpenSSH per-connection server daemon (10.0.0.1:32842). Apr 21 10:31:58.920886 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:32838.service: Deactivated successfully. Apr 21 10:31:58.925418 systemd-logind[1562]: Session 18 logged out. 
Waiting for processes to exit. Apr 21 10:31:58.927858 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:31:58.929858 systemd-logind[1562]: Removed session 18. Apr 21 10:31:58.971530 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 32842 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:58.974390 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:58.987730 systemd-logind[1562]: New session 19 of user core. Apr 21 10:31:58.992958 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:31:59.282053 sshd[5820]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:59.292049 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:32848.service - OpenSSH per-connection server daemon (10.0.0.1:32848). Apr 21 10:31:59.292927 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:32842.service: Deactivated successfully. Apr 21 10:31:59.295011 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:31:59.296457 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:31:59.297897 systemd-logind[1562]: Removed session 19. Apr 21 10:31:59.328288 sshd[5835]: Accepted publickey for core from 10.0.0.1 port 32848 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:31:59.329461 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:31:59.332768 systemd-logind[1562]: New session 20 of user core. Apr 21 10:31:59.336953 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:31:59.438430 sshd[5835]: pam_unix(sshd:session): session closed for user core Apr 21 10:31:59.440918 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:32848.service: Deactivated successfully. Apr 21 10:31:59.442492 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 10:31:59.442679 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. 
Apr 21 10:31:59.443609 systemd-logind[1562]: Removed session 20. Apr 21 10:32:04.450990 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:32862.service - OpenSSH per-connection server daemon (10.0.0.1:32862). Apr 21 10:32:04.485105 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 32862 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:32:04.486668 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:32:04.489924 systemd-logind[1562]: New session 21 of user core. Apr 21 10:32:04.493916 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 21 10:32:04.622298 sshd[5886]: pam_unix(sshd:session): session closed for user core Apr 21 10:32:04.625310 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:32862.service: Deactivated successfully. Apr 21 10:32:04.626956 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Apr 21 10:32:04.627020 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 10:32:04.627695 systemd-logind[1562]: Removed session 21. Apr 21 10:32:04.755136 kubelet[2677]: I0421 10:32:04.754978 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:32:05.537118 kubelet[2677]: E0421 10:32:05.537085 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:32:09.629948 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:43514.service - OpenSSH per-connection server daemon (10.0.0.1:43514). Apr 21 10:32:09.659756 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 43514 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:32:09.660732 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:32:09.663906 systemd-logind[1562]: New session 22 of user core. 
Apr 21 10:32:09.668939 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 21 10:32:09.788118 sshd[5948]: pam_unix(sshd:session): session closed for user core Apr 21 10:32:09.790730 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:43514.service: Deactivated successfully. Apr 21 10:32:09.792252 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Apr 21 10:32:09.792323 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 10:32:09.793341 systemd-logind[1562]: Removed session 22.