Oct 29 23:59:27.209380 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:08:54 -00 2025 Oct 29 23:59:27.209411 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 29 23:59:27.209427 kernel: BIOS-provided physical RAM map: Oct 29 23:59:27.209437 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 29 23:59:27.209446 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 29 23:59:27.209455 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 29 23:59:27.209466 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 29 23:59:27.209477 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 29 23:59:27.209492 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 29 23:59:27.209501 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 29 23:59:27.209514 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Oct 29 23:59:27.209523 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 29 23:59:27.209532 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 29 23:59:27.209542 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 29 23:59:27.209554 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 29 23:59:27.209567 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 29 23:59:27.209581 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 29 23:59:27.209591 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 29 23:59:27.209601 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 29 23:59:27.209611 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 29 23:59:27.209621 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 29 23:59:27.209631 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 29 23:59:27.209641 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 29 23:59:27.209650 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 23:59:27.209660 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 29 23:59:27.209674 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 29 23:59:27.209684 kernel: NX (Execute Disable) protection: active Oct 29 23:59:27.209705 kernel: APIC: Static calls initialized Oct 29 23:59:27.209715 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable Oct 29 23:59:27.209726 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable Oct 29 23:59:27.209737 kernel: extended physical RAM map: Oct 29 23:59:27.209747 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 29 23:59:27.209757 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 29 23:59:27.209767 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 29 23:59:27.209778 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 29 23:59:27.209787 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 29 23:59:27.209802 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 29 23:59:27.209812 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 29 23:59:27.209822 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable Oct 29 23:59:27.209833 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable Oct 29 23:59:27.209849 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable Oct 29 23:59:27.209862 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable Oct 29 23:59:27.209872 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable Oct 29 23:59:27.209883 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 29 23:59:27.209894 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 29 23:59:27.209904 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 29 23:59:27.209915 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 29 23:59:27.209925 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 29 23:59:27.209936 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 29 23:59:27.209950 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 29 23:59:27.209961 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 29 23:59:27.209972 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 29 23:59:27.209982 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 29 23:59:27.209993 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 29 23:59:27.210003 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 29 23:59:27.210014 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 23:59:27.210024 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 29 23:59:27.210035 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 29 23:59:27.210050 kernel: efi: EFI v2.7 by EDK II Oct 29 23:59:27.210061 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Oct 29 23:59:27.210075 kernel: random: crng init done Oct 29 23:59:27.210089 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Oct 29 23:59:27.210128 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Oct 29 23:59:27.210141 kernel: secureboot: Secure boot disabled Oct 29 23:59:27.210152 kernel: SMBIOS 2.8 present. 
Oct 29 23:59:27.210163 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 29 23:59:27.210174 kernel: DMI: Memory slots populated: 1/1 Oct 29 23:59:27.210184 kernel: Hypervisor detected: KVM Oct 29 23:59:27.210195 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 29 23:59:27.210205 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 29 23:59:27.210216 kernel: kvm-clock: using sched offset of 5138986249 cycles Oct 29 23:59:27.210231 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 29 23:59:27.210243 kernel: tsc: Detected 2794.748 MHz processor Oct 29 23:59:27.210254 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 29 23:59:27.210265 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 29 23:59:27.210276 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 29 23:59:27.210287 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 29 23:59:27.210299 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 29 23:59:27.210311 kernel: Using GB pages for direct mapping Oct 29 23:59:27.210327 kernel: ACPI: Early table checksum verification disabled Oct 29 23:59:27.210338 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 29 23:59:27.210349 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 29 23:59:27.210361 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:59:27.210372 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:59:27.210383 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 29 23:59:27.210394 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:59:27.210410 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:59:27.210421 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:59:27.210433 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:59:27.210445 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 29 23:59:27.210456 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 29 23:59:27.210468 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Oct 29 23:59:27.210478 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 29 23:59:27.210494 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 29 23:59:27.210505 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 29 23:59:27.210516 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 29 23:59:27.210527 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 29 23:59:27.210539 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 29 23:59:27.210550 kernel: No NUMA configuration found Oct 29 23:59:27.210562 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Oct 29 23:59:27.210577 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Oct 29 23:59:27.210588 kernel: Zone ranges: Oct 29 23:59:27.210599 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 29 23:59:27.210611 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Oct 29 23:59:27.210622 kernel: Normal empty Oct 29 23:59:27.210633 kernel: Device empty Oct 29 
23:59:27.210644 kernel: Movable zone start for each node Oct 29 23:59:27.210655 kernel: Early memory node ranges Oct 29 23:59:27.210671 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 29 23:59:27.210686 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 29 23:59:27.210708 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 29 23:59:27.210720 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Oct 29 23:59:27.210731 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Oct 29 23:59:27.210742 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Oct 29 23:59:27.210754 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Oct 29 23:59:27.210769 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Oct 29 23:59:27.210785 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Oct 29 23:59:27.210797 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 23:59:27.210819 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 29 23:59:27.210834 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 29 23:59:27.210846 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 23:59:27.210857 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Oct 29 23:59:27.210869 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Oct 29 23:59:27.210882 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 29 23:59:27.210893 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 29 23:59:27.210910 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Oct 29 23:59:27.210922 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 29 23:59:27.210934 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 29 23:59:27.210945 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 29 23:59:27.210961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 29 23:59:27.210972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 29 23:59:27.210984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 29 23:59:27.210996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 29 23:59:27.211008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 29 23:59:27.211019 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 29 23:59:27.211031 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 29 23:59:27.211047 kernel: TSC deadline timer available Oct 29 23:59:27.211058 kernel: CPU topo: Max. logical packages: 1 Oct 29 23:59:27.211070 kernel: CPU topo: Max. logical dies: 1 Oct 29 23:59:27.211081 kernel: CPU topo: Max. dies per package: 1 Oct 29 23:59:27.211110 kernel: CPU topo: Max. threads per core: 1 Oct 29 23:59:27.211124 kernel: CPU topo: Num. cores per package: 4 Oct 29 23:59:27.211136 kernel: CPU topo: Num. 
threads per package: 4 Oct 29 23:59:27.211152 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 29 23:59:27.211164 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 29 23:59:27.211176 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 29 23:59:27.211188 kernel: kvm-guest: setup PV sched yield Oct 29 23:59:27.211199 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 29 23:59:27.211211 kernel: Booting paravirtualized kernel on KVM Oct 29 23:59:27.211223 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 29 23:59:27.211239 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 29 23:59:27.211251 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 29 23:59:27.211263 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 29 23:59:27.211274 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 29 23:59:27.211285 kernel: kvm-guest: PV spinlocks enabled Oct 29 23:59:27.211296 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 29 23:59:27.211313 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 29 23:59:27.211328 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 29 23:59:27.211339 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 23:59:27.211350 kernel: Fallback order for Node 0: 0 Oct 29 23:59:27.211361 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Oct 29 23:59:27.211372 kernel: Policy zone: DMA32 Oct 29 23:59:27.211384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 23:59:27.211396 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 29 23:59:27.211411 kernel: ftrace: allocating 40092 entries in 157 pages Oct 29 23:59:27.211423 kernel: ftrace: allocated 157 pages with 5 groups Oct 29 23:59:27.211435 kernel: Dynamic Preempt: voluntary Oct 29 23:59:27.211447 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 23:59:27.211460 kernel: rcu: RCU event tracing is enabled. Oct 29 23:59:27.211472 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 29 23:59:27.211484 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 23:59:27.211500 kernel: Rude variant of Tasks RCU enabled. Oct 29 23:59:27.211512 kernel: Tracing variant of Tasks RCU enabled. Oct 29 23:59:27.211524 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 23:59:27.211536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 29 23:59:27.211552 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:59:27.211564 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:59:27.211577 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:59:27.211593 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 29 23:59:27.211605 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 29 23:59:27.211617 kernel: Console: colour dummy device 80x25 Oct 29 23:59:27.211630 kernel: printk: legacy console [ttyS0] enabled Oct 29 23:59:27.211642 kernel: ACPI: Core revision 20240827 Oct 29 23:59:27.211654 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 29 23:59:27.211666 kernel: APIC: Switch to symmetric I/O mode setup Oct 29 23:59:27.211678 kernel: x2apic enabled Oct 29 23:59:27.211704 kernel: APIC: Switched APIC routing to: physical x2apic Oct 29 23:59:27.211717 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 29 23:59:27.211729 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 29 23:59:27.211740 kernel: kvm-guest: setup PV IPIs Oct 29 23:59:27.211752 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 29 23:59:27.211764 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 29 23:59:27.211776 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Oct 29 23:59:27.211793 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 29 23:59:27.211805 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 29 23:59:27.211817 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 29 23:59:27.211829 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 29 23:59:27.211841 kernel: Spectre V2 : Mitigation: Retpolines Oct 29 23:59:27.211853 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 29 23:59:27.211865 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 29 23:59:27.211881 kernel: active return thunk: retbleed_return_thunk Oct 29 23:59:27.211892 kernel: RETBleed: Mitigation: untrained return thunk Oct 29 23:59:27.211908 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 23:59:27.211920 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 29 23:59:27.211932 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 29 23:59:27.211945 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 29 23:59:27.211961 kernel: active return thunk: srso_return_thunk Oct 29 23:59:27.211973 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 29 23:59:27.211985 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 23:59:27.211997 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 23:59:27.212009 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 23:59:27.212021 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 23:59:27.212033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 29 23:59:27.212048 kernel: Freeing SMP alternatives memory: 32K Oct 29 23:59:27.212060 kernel: pid_max: default: 32768 minimum: 301 Oct 29 23:59:27.212072 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 23:59:27.212084 kernel: landlock: Up and running. Oct 29 23:59:27.212115 kernel: SELinux: Initializing. 
Oct 29 23:59:27.212127 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 23:59:27.212140 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 23:59:27.212156 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 29 23:59:27.212168 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 29 23:59:27.212180 kernel: ... version: 0 Oct 29 23:59:27.212192 kernel: ... bit width: 48 Oct 29 23:59:27.212204 kernel: ... generic registers: 6 Oct 29 23:59:27.212216 kernel: ... value mask: 0000ffffffffffff Oct 29 23:59:27.212228 kernel: ... max period: 00007fffffffffff Oct 29 23:59:27.212239 kernel: ... fixed-purpose events: 0 Oct 29 23:59:27.212254 kernel: ... event mask: 000000000000003f Oct 29 23:59:27.212266 kernel: signal: max sigframe size: 1776 Oct 29 23:59:27.212278 kernel: rcu: Hierarchical SRCU implementation. Oct 29 23:59:27.212290 kernel: rcu: Max phase no-delay instances is 400. Oct 29 23:59:27.212307 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 29 23:59:27.212320 kernel: smp: Bringing up secondary CPUs ... Oct 29 23:59:27.212332 kernel: smpboot: x86: Booting SMP configuration: Oct 29 23:59:27.212347 kernel: .... node #0, CPUs: #1 #2 #3 Oct 29 23:59:27.212359 kernel: smp: Brought up 1 node, 4 CPUs Oct 29 23:59:27.212371 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 29 23:59:27.212384 kernel: Memory: 2445192K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15956K init, 2088K bss, 114672K reserved, 0K cma-reserved) Oct 29 23:59:27.212395 kernel: devtmpfs: initialized Oct 29 23:59:27.212407 kernel: x86/mm: Memory block size: 128MB Oct 29 23:59:27.212419 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 29 23:59:27.212435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 29 23:59:27.212447 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Oct 29 23:59:27.212459 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 29 23:59:27.212471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Oct 29 23:59:27.212483 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 29 23:59:27.212495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 23:59:27.212507 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 29 23:59:27.212522 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 23:59:27.212534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 23:59:27.212545 kernel: audit: initializing netlink subsys (disabled) Oct 29 23:59:27.212557 kernel: audit: type=2000 audit(1761782363.816:1): state=initialized audit_enabled=0 res=1 Oct 29 23:59:27.212569 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 23:59:27.212581 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 23:59:27.212592 kernel: cpuidle: using governor menu Oct 29 23:59:27.212606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 23:59:27.212618 kernel: dca service started, version 1.12.1 Oct 29 23:59:27.212629 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 29 23:59:27.212641 kernel: PCI: 
Using configuration type 1 for base access Oct 29 23:59:27.212653 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 29 23:59:27.212664 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 23:59:27.212675 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 29 23:59:27.212689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 23:59:27.212712 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 29 23:59:27.212723 kernel: ACPI: Added _OSI(Module Device) Oct 29 23:59:27.212734 kernel: ACPI: Added _OSI(Processor Device) Oct 29 23:59:27.212745 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 23:59:27.212756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 23:59:27.212768 kernel: ACPI: Interpreter enabled Oct 29 23:59:27.212782 kernel: ACPI: PM: (supports S0 S3 S5) Oct 29 23:59:27.212793 kernel: ACPI: Using IOAPIC for interrupt routing Oct 29 23:59:27.212804 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 29 23:59:27.212815 kernel: PCI: Using E820 reservations for host bridge windows Oct 29 23:59:27.212826 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 29 23:59:27.212837 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 23:59:27.213170 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 23:59:27.213402 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 29 23:59:27.213614 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 29 23:59:27.213629 kernel: PCI host bridge to bus 0000:00 Oct 29 23:59:27.213850 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 29 23:59:27.214049 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 29 23:59:27.214357 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 29 23:59:27.214559 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 29 23:59:27.214736 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 29 23:59:27.214898 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 29 23:59:27.215079 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 23:59:27.215336 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 29 23:59:27.215564 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 29 23:59:27.215787 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 29 23:59:27.215987 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 29 23:59:27.216186 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 29 23:59:27.216375 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 29 23:59:27.216573 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 29 23:59:27.216802 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 29 23:59:27.216988 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 29 23:59:27.217185 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 29 23:59:27.217431 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 29 23:59:27.217656 kernel: pci 0000:00:03.0: BAR 0 [io 
0x6000-0x607f] Oct 29 23:59:27.217855 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 29 23:59:27.218388 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 29 23:59:27.218601 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 29 23:59:27.218810 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 29 23:59:27.218988 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 29 23:59:27.219200 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 29 23:59:27.219406 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 29 23:59:27.219628 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 29 23:59:27.219865 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 29 23:59:27.220134 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 29 23:59:27.220376 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 29 23:59:27.220588 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 29 23:59:27.220790 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 29 23:59:27.220964 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 29 23:59:27.220977 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 29 23:59:27.220986 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 29 23:59:27.220995 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 29 23:59:27.221008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 29 23:59:27.221016 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 29 23:59:27.221025 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 29 23:59:27.221033 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 29 23:59:27.221042 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 29 23:59:27.221050 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 29 23:59:27.221058 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 29 23:59:27.221069 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 29 23:59:27.221078 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 29 23:59:27.221086 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 29 23:59:27.221138 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 29 23:59:27.221148 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 29 23:59:27.221157 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 29 23:59:27.221165 kernel: iommu: Default domain type: Translated Oct 29 23:59:27.221177 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 29 23:59:27.221185 kernel: efivars: Registered efivars operations Oct 29 23:59:27.221194 kernel: PCI: Using ACPI for IRQ routing Oct 29 23:59:27.221203 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 29 23:59:27.221212 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 29 23:59:27.221220 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Oct 29 23:59:27.221229 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff] Oct 29 23:59:27.221237 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff] Oct 29 23:59:27.221247 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Oct 29 23:59:27.221256 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Oct 29 23:59:27.221266 
kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Oct 29 23:59:27.221277 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Oct 29 23:59:27.221477 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 29 23:59:27.221653 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 29 23:59:27.221842 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 29 23:59:27.221854 kernel: vgaarb: loaded Oct 29 23:59:27.221863 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 29 23:59:27.221872 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 29 23:59:27.221881 kernel: clocksource: Switched to clocksource kvm-clock Oct 29 23:59:27.221889 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 23:59:27.221898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 23:59:27.221909 kernel: pnp: PnP ACPI init Oct 29 23:59:27.222130 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 29 23:59:27.222148 kernel: pnp: PnP ACPI: found 6 devices Oct 29 23:59:27.222157 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 29 23:59:27.222166 kernel: NET: Registered PF_INET protocol family Oct 29 23:59:27.222175 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 29 23:59:27.222184 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 29 23:59:27.222195 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 23:59:27.222204 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 23:59:27.222213 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 29 23:59:27.222222 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 29 23:59:27.222231 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 23:59:27.222241 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 23:59:27.222250 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 23:59:27.222262 kernel: NET: Registered PF_XDP protocol family Oct 29 23:59:27.222448 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 29 23:59:27.222628 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 29 23:59:27.222806 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 29 23:59:27.222970 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 29 23:59:27.223149 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 29 23:59:27.223330 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 29 23:59:27.223498 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 29 23:59:27.225145 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 29 23:59:27.225194 kernel: PCI: CLS 0 bytes, default 64 Oct 29 23:59:27.225206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 29 23:59:27.225224 kernel: Initialise system trusted keyrings Oct 29 23:59:27.225235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 29 23:59:27.225244 kernel: Key type asymmetric registered Oct 29 23:59:27.225253 kernel: Asymmetric key parser 'x509' registered Oct 29 23:59:27.225263 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250) Oct 29 23:59:27.225272 kernel: io scheduler mq-deadline registered Oct 29 23:59:27.225284 kernel: io scheduler kyber registered Oct 29 23:59:27.225293 kernel: io scheduler bfq registered Oct 29 23:59:27.225303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 23:59:27.225314 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 29 23:59:27.225326 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 29 23:59:27.225336 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 29 23:59:27.225345 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 23:59:27.225356 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 23:59:27.225366 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 29 23:59:27.225375 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 23:59:27.225384 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 23:59:27.225626 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 29 23:59:27.225642 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 23:59:27.225837 kernel: rtc_cmos 00:04: registered as rtc0 Oct 29 23:59:27.228216 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T23:59:25 UTC (1761782365) Oct 29 23:59:27.228403 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 29 23:59:27.228417 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 29 23:59:27.228429 kernel: efifb: probing for efifb Oct 29 23:59:27.228439 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 29 23:59:27.228450 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 29 23:59:27.228461 kernel: efifb: scrolling: redraw Oct 29 23:59:27.228477 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 29 23:59:27.228488 kernel: Console: switching to colour frame buffer device 160x50 Oct 29 23:59:27.228499 kernel: fb0: EFI VGA frame buffer device Oct 29 23:59:27.228510 kernel: pstore: Using crash dump compression: deflate Oct 29 23:59:27.228521 kernel: pstore: Registered efi_pstore as persistent store backend Oct 29 23:59:27.228532 kernel: NET: Registered PF_INET6 protocol family Oct 29 23:59:27.228543 kernel: Segment Routing with IPv6 Oct 29 23:59:27.228556 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 23:59:27.228567 kernel: NET: Registered PF_PACKET protocol family Oct 29 23:59:27.228578 kernel: Key type dns_resolver registered Oct 29 23:59:27.228589 kernel: IPI shorthand broadcast: enabled Oct 29 23:59:27.228600 kernel: sched_clock: Marking stable (1688006109, 305411819)->(2058533215, -65115287) Oct 29 23:59:27.228611 kernel: registered taskstats version 1 Oct 29 23:59:27.228621 kernel: Loading compiled-in X.509 certificates Oct 29 23:59:27.228635 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: b5a3367ee15a1313a0db8339b653e9e56c1bb8d0' Oct 29 23:59:27.228645 kernel: Demotion targets for Node 0: null Oct 29 23:59:27.228656 kernel: Key type .fscrypt registered Oct 29 23:59:27.228666 kernel: Key type fscrypt-provisioning registered Oct 29 23:59:27.228677 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 29 23:59:27.228687 kernel: ima: Allocated hash algorithm: sha1 Oct 29 23:59:27.228710 kernel: ima: No architecture policies found Oct 29 23:59:27.228724 kernel: clk: Disabling unused clocks Oct 29 23:59:27.228734 kernel: Freeing unused kernel image (initmem) memory: 15956K Oct 29 23:59:27.228745 kernel: Write protecting the kernel read-only data: 40960k Oct 29 23:59:27.228756 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 29 23:59:27.228767 kernel: Run /init as init process Oct 29 23:59:27.228778 kernel: with arguments: Oct 29 23:59:27.228789 kernel: /init Oct 29 23:59:27.228800 kernel: with environment: Oct 29 23:59:27.228813 kernel: HOME=/ Oct 29 23:59:27.228823 kernel: TERM=linux Oct 29 23:59:27.228834 kernel: SCSI subsystem initialized Oct 29 23:59:27.228845 kernel: libata version 3.00 loaded. Oct 29 23:59:27.229050 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 23:59:27.229067 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 23:59:27.229276 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 29 23:59:27.229473 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 29 23:59:27.229659 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 23:59:27.229887 kernel: scsi host0: ahci Oct 29 23:59:27.232040 kernel: scsi host1: ahci Oct 29 23:59:27.232341 kernel: scsi host2: ahci Oct 29 23:59:27.232585 kernel: scsi host3: ahci Oct 29 23:59:27.232804 kernel: scsi host4: ahci Oct 29 23:59:27.232995 kernel: scsi host5: ahci Oct 29 23:59:27.233010 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 29 23:59:27.233020 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 29 23:59:27.233029 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 29 23:59:27.233042 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 29 23:59:27.233051 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 29 23:59:27.233060 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 29 23:59:27.233070 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 29 23:59:27.233080 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 23:59:27.233089 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 23:59:27.233114 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 23:59:27.233125 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 23:59:27.233134 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 29 23:59:27.233144 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 23:59:27.233159 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 29 23:59:27.233175 kernel: ata3.00: applying bridge limits Oct 29 23:59:27.233187 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 23:59:27.233199 kernel: ata3.00: configured for UDMA/100 Oct 29 23:59:27.233455 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 29 23:59:27.233676 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 29 23:59:27.233886 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 29 23:59:27.233900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 23:59:27.233912 kernel: GPT:16515071 != 27000831 Oct 29 23:59:27.233922 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 29 23:59:27.233937 kernel: GPT:16515071 != 27000831 Oct 29 23:59:27.233947 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 23:59:27.233958 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 23:59:27.234179 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 29 23:59:27.234194 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 23:59:27.234396 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 23:59:27.234411 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 23:59:27.234426 kernel: device-mapper: uevent: version 1.0.3 Oct 29 23:59:27.234437 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 23:59:27.234447 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 23:59:27.234459 kernel: raid6: avx2x4 gen() 24543 MB/s Oct 29 23:59:27.234469 kernel: raid6: avx2x2 gen() 28987 MB/s Oct 29 23:59:27.234480 kernel: raid6: avx2x1 gen() 24823 MB/s Oct 29 23:59:27.234491 kernel: raid6: using algorithm avx2x2 gen() 28987 MB/s Oct 29 23:59:27.234504 kernel: raid6: .... xor() 18539 MB/s, rmw enabled Oct 29 23:59:27.234515 kernel: raid6: using avx2x2 recovery algorithm Oct 29 23:59:27.234525 kernel: xor: automatically using best checksumming function avx Oct 29 23:59:27.234536 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 23:59:27.234548 kernel: BTRFS: device fsid 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (180) Oct 29 23:59:27.234559 kernel: BTRFS info (device dm-0): first mount of filesystem 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe Oct 29 23:59:27.234569 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:27.234583 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 23:59:27.234593 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 23:59:27.234605 kernel: loop: module loaded Oct 29 23:59:27.234615 kernel: loop0: detected capacity change from 0 to 100120 Oct 29 23:59:27.234626 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 23:59:27.234638 systemd[1]: Successfully made /usr/ read-only. Oct 29 23:59:27.234657 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:59:27.234670 systemd[1]: Detected virtualization kvm. Oct 29 23:59:27.234681 systemd[1]: Detected architecture x86-64. Oct 29 23:59:27.234691 systemd[1]: Running in initrd. Oct 29 23:59:27.234712 systemd[1]: No hostname configured, using default hostname. Oct 29 23:59:27.234723 systemd[1]: Hostname set to . Oct 29 23:59:27.234737 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 23:59:27.234748 systemd[1]: Queued start job for default target initrd.target. Oct 29 23:59:27.234760 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 23:59:27.234771 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:59:27.234783 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 29 23:59:27.234795 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 23:59:27.234806 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:59:27.234821 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 23:59:27.234832 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 23:59:27.234846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:59:27.234857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:59:27.234869 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:59:27.234882 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:59:27.234893 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:59:27.234905 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:59:27.234916 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:59:27.234927 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:59:27.234938 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:59:27.234949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 23:59:27.234963 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 23:59:27.234974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:59:27.234986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:59:27.234997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:59:27.235008 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:59:27.235020 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 23:59:27.235032 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 23:59:27.235046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:59:27.235057 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 23:59:27.235069 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 23:59:27.235081 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 23:59:27.235092 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:59:27.235116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:59:27.235127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:27.235142 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 23:59:27.235153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:59:27.235165 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 23:59:27.235179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 23:59:27.329944 systemd-journald[316]: Collecting audit messages is disabled. Oct 29 23:59:27.330019 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 29 23:59:27.330045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:59:27.330059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:27.330074 systemd-journald[316]: Journal started Oct 29 23:59:27.330117 systemd-journald[316]: Runtime Journal (/run/log/journal/5e1f703b74b74c2f9a5a2333c39dbeff) is 6M, max 48.1M, 42.1M free. Oct 29 23:59:27.337135 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 23:59:27.337177 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:59:27.341821 systemd-modules-load[317]: Inserted module 'br_netfilter' Oct 29 23:59:27.343816 kernel: Bridge firewalling registered Oct 29 23:59:27.343917 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:59:27.347350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 23:59:27.351223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:59:27.356380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:59:27.369515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:59:27.382305 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 23:59:27.390876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:59:27.392874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:59:27.393783 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:59:27.403958 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 23:59:27.406224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:59:27.458015 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 29 23:59:27.500000 systemd-resolved[357]: Positive Trust Anchors: Oct 29 23:59:27.500018 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:59:27.500024 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 23:59:27.500064 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:59:27.530146 systemd-resolved[357]: Defaulting to hostname 'linux'. Oct 29 23:59:27.532120 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 29 23:59:27.532880 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:59:27.608157 kernel: Loading iSCSI transport class v2.0-870. Oct 29 23:59:27.622134 kernel: iscsi: registered transport (tcp) Oct 29 23:59:27.669157 kernel: iscsi: registered transport (qla4xxx) Oct 29 23:59:27.669243 kernel: QLogic iSCSI HBA Driver Oct 29 23:59:27.700496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:59:27.728068 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:59:27.730705 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:59:27.798631 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 23:59:27.812301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 23:59:27.816464 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 23:59:27.900081 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:59:27.902612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:59:27.952157 systemd-udevd[595]: Using default interface naming scheme 'v257'. Oct 29 23:59:27.967722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:59:27.978727 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 23:59:28.010795 dracut-pre-trigger[660]: rd.md=0: removing MD RAID activation Oct 29 23:59:28.048274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:59:28.063399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:59:28.084241 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:59:28.086450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:59:28.153277 systemd-networkd[726]: lo: Link UP Oct 29 23:59:28.153288 systemd-networkd[726]: lo: Gained carrier Oct 29 23:59:28.154107 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:59:28.155507 systemd[1]: Reached target network.target - Network. Oct 29 23:59:28.205364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:59:28.209773 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 23:59:28.265158 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 23:59:28.293243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 23:59:28.303979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 23:59:28.324956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 23:59:28.332000 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 23:59:28.329775 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 23:59:28.344647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:59:28.344836 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:28.347013 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 29 23:59:28.354529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:28.370583 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 29 23:59:28.370643 kernel: AES CTR mode by8 optimization enabled Oct 29 23:59:28.386386 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:59:28.386399 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 23:59:28.386856 systemd-networkd[726]: eth0: Link UP Oct 29 23:59:28.389909 systemd-networkd[726]: eth0: Gained carrier Oct 29 23:59:28.389926 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:59:28.404760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:28.416219 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 23:59:28.574483 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 29 23:59:28.600702 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:59:28.603489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:59:28.605405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:59:28.610716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 23:59:28.641961 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:59:28.645637 disk-uuid[776]: Primary Header is updated. Oct 29 23:59:28.645637 disk-uuid[776]: Secondary Entries is updated. Oct 29 23:59:28.645637 disk-uuid[776]: Secondary Header is updated. Oct 29 23:59:29.694526 disk-uuid[857]: Warning: The kernel is still using the old partition table. Oct 29 23:59:29.694526 disk-uuid[857]: The new table will be used at the next reboot or after you Oct 29 23:59:29.694526 disk-uuid[857]: run partprobe(8) or kpartx(8) Oct 29 23:59:29.694526 disk-uuid[857]: The operation has completed successfully. Oct 29 23:59:29.707915 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 23:59:29.708143 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 23:59:29.713803 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 23:59:29.760208 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Oct 29 23:59:29.760266 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:29.760286 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:29.765339 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:59:29.765365 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:59:29.774122 kernel: BTRFS info (device vda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:29.774712 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 23:59:29.779223 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 29 23:59:29.911688 systemd-networkd[726]: eth0: Gained IPv6LL Oct 29 23:59:30.092121 ignition[886]: Ignition 2.22.0 Oct 29 23:59:30.092137 ignition[886]: Stage: fetch-offline Oct 29 23:59:30.092189 ignition[886]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:30.092201 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:59:30.092317 ignition[886]: parsed url from cmdline: "" Oct 29 23:59:30.092321 ignition[886]: no config URL provided Oct 29 23:59:30.092327 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 23:59:30.092338 ignition[886]: no config at "/usr/lib/ignition/user.ign" Oct 29 23:59:30.092388 ignition[886]: op(1): [started] loading QEMU firmware config module Oct 29 23:59:30.092394 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 23:59:30.105125 ignition[886]: op(1): [finished] loading QEMU firmware config module Oct 29 23:59:30.193419 ignition[886]: parsing config with SHA512: 8ab7619cd6e3bf5e44d8d45a4bd73f9c604cd00a8166dda5bc9cff2f587ff229032bdb2093b3bc2fc7ba405ffbd4be798274a444b436e3209947672e96cfab6b Oct 29 23:59:30.203624 unknown[886]: fetched base config from "system" Oct 29 23:59:30.203638 unknown[886]: fetched user config from "qemu" Oct 29 23:59:30.204073 ignition[886]: fetch-offline: fetch-offline passed Oct 29 23:59:30.204203 ignition[886]: Ignition finished successfully Oct 29 23:59:30.209443 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:59:30.214060 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 23:59:30.218324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 29 23:59:30.356310 ignition[897]: Ignition 2.22.0 Oct 29 23:59:30.356323 ignition[897]: Stage: kargs Oct 29 23:59:30.356485 ignition[897]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:30.356496 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:59:30.359628 ignition[897]: kargs: kargs passed Oct 29 23:59:30.359799 ignition[897]: Ignition finished successfully Oct 29 23:59:30.369133 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 29 23:59:30.372394 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 29 23:59:30.506903 ignition[905]: Ignition 2.22.0 Oct 29 23:59:30.506921 ignition[905]: Stage: disks Oct 29 23:59:30.507133 ignition[905]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:30.507147 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:59:30.511354 ignition[905]: disks: disks passed Oct 29 23:59:30.511434 ignition[905]: Ignition finished successfully Oct 29 23:59:30.514868 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 23:59:30.517931 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 23:59:30.521336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 23:59:30.521984 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:59:30.522539 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:59:30.522822 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:59:30.524278 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 29 23:59:30.568879 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 29 23:59:30.577053 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 23:59:30.584063 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 29 23:59:30.762148 kernel: EXT4-fs (vda9): mounted filesystem 357f8fb5-672c-465c-a10c-74ee57b7ef1c r/w with ordered data mode. Quota mode: none. Oct 29 23:59:30.762985 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 23:59:30.765132 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 23:59:30.769653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:59:30.773179 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 29 23:59:30.774718 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 23:59:30.774771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 23:59:30.774806 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:59:30.790680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 23:59:30.795571 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Oct 29 23:59:30.794255 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 23:59:30.804278 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:30.804301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:30.804313 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:59:30.804325 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:59:30.806297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:59:30.861844 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 23:59:30.868497 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Oct 29 23:59:30.873115 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 23:59:30.879803 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 23:59:31.004031 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 23:59:31.010371 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 23:59:31.013698 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 23:59:31.038772 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 23:59:31.042690 kernel: BTRFS info (device vda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:31.058867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 23:59:31.080206 ignition[1037]: INFO : Ignition 2.22.0 Oct 29 23:59:31.080206 ignition[1037]: INFO : Stage: mount Oct 29 23:59:31.082755 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:31.082755 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:59:31.086911 ignition[1037]: INFO : mount: mount passed Oct 29 23:59:31.088162 ignition[1037]: INFO : Ignition finished successfully Oct 29 23:59:31.092389 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 23:59:31.096627 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 29 23:59:31.764975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:59:31.796934 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050) Oct 29 23:59:31.796970 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:31.796983 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:31.803302 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:59:31.803326 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:59:31.805409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:59:31.874449 ignition[1067]: INFO : Ignition 2.22.0 Oct 29 23:59:31.874449 ignition[1067]: INFO : Stage: files Oct 29 23:59:31.877681 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:31.877681 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:59:31.877681 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Oct 29 23:59:31.883633 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 23:59:31.883633 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 23:59:31.891897 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 23:59:31.894590 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 23:59:31.897143 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 23:59:31.895771 unknown[1067]: wrote ssh authorized keys file for user: core Oct 29 23:59:31.902673 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 29 23:59:31.906387 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 29 23:59:31.971116 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 23:59:32.060531 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 29 23:59:32.060531 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 23:59:32.066577 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 23:59:32.069333 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 23:59:32.072283 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 23:59:32.075111 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 23:59:32.078055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 23:59:32.080946 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 23:59:32.084177 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 29 23:59:32.091352 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 23:59:32.094560 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 23:59:32.097630 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 23:59:32.097630 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 23:59:32.097630 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 23:59:32.097630 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 29 23:59:32.403154 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 23:59:33.159084 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 23:59:33.159084 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 23:59:33.166019 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 23:59:33.169290 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 23:59:33.198739 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 23:59:33.206831 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 23:59:33.209372 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 23:59:33.209372 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 23:59:33.209372 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 23:59:33.209372 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 23:59:33.209372 
ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 23:59:33.209372 ignition[1067]: INFO : files: files passed Oct 29 23:59:33.209372 ignition[1067]: INFO : Ignition finished successfully Oct 29 23:59:33.215639 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 23:59:33.219326 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 23:59:33.236841 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 23:59:33.239857 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 23:59:33.239979 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 29 23:59:33.254605 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory Oct 29 23:59:33.259722 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:59:33.259722 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:59:33.265090 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:59:33.269367 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:59:33.270211 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 29 23:59:33.275017 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 29 23:59:33.329162 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 23:59:33.329394 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 29 23:59:33.330943 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 23:59:33.336031 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 23:59:33.343008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 23:59:33.344245 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 23:59:33.385827 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:59:33.390365 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 23:59:33.414264 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 23:59:33.414483 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:59:33.415826 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:59:33.424564 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 23:59:33.425723 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 23:59:33.425868 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:59:33.431628 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 23:59:33.435406 systemd[1]: Stopped target basic.target - Basic System. Oct 29 23:59:33.436734 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 23:59:33.440849 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:59:33.441772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Oct 29 23:59:33.448603 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:59:33.455416 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 23:59:33.456597 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:59:33.460057 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 23:59:33.464716 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 23:59:33.468669 systemd[1]: Stopped target swap.target - Swaps. Oct 29 23:59:33.471732 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 23:59:33.471853 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:59:33.476694 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:59:33.479974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:59:33.480818 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 23:59:33.485172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:59:33.488561 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 23:59:33.488676 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 23:59:33.493778 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 23:59:33.493897 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:59:33.497244 systemd[1]: Stopped target paths.target - Path Units. Oct 29 23:59:33.498042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 23:59:33.502178 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:59:33.505075 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 23:59:33.505967 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 23:59:33.511776 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 23:59:33.511868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:59:33.514651 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 23:59:33.514738 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:59:33.517645 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 23:59:33.517775 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:59:33.520847 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 23:59:33.520956 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 29 23:59:33.528716 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 23:59:33.531874 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 23:59:33.531993 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:59:33.533646 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 23:59:33.537588 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 23:59:33.537756 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:59:33.538259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 23:59:33.538402 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:59:33.539074 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 29 23:59:33.539269 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:59:33.558891 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 23:59:33.559034 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 29 23:59:33.587484 ignition[1124]: INFO : Ignition 2.22.0 Oct 29 23:59:33.587484 ignition[1124]: INFO : Stage: umount Oct 29 23:59:33.590249 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:33.590249 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:59:33.590249 ignition[1124]: INFO : umount: umount passed Oct 29 23:59:33.590249 ignition[1124]: INFO : Ignition finished successfully Oct 29 23:59:33.596993 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 23:59:33.597167 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 23:59:33.599136 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 23:59:33.601982 systemd[1]: Stopped target network.target - Network. Oct 29 23:59:33.604664 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 23:59:33.604739 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 23:59:33.607486 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 23:59:33.607572 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 23:59:33.608600 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 23:59:33.608657 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 23:59:33.614838 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 23:59:33.614908 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 23:59:33.618014 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 23:59:33.618832 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 23:59:33.640595 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 23:59:33.640778 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 23:59:33.647632 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 23:59:33.647801 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 23:59:33.654214 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 23:59:33.654994 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 23:59:33.655069 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:59:33.660129 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 23:59:33.664822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 23:59:33.664914 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:59:33.672636 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 23:59:33.672717 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:59:33.673933 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 23:59:33.673992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 23:59:33.679094 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:59:33.689504 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 23:59:33.689705 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Oct 29 23:59:33.690785 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 23:59:33.690854 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 23:59:33.699661 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 23:59:33.699872 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:59:33.703378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 23:59:33.703437 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 23:59:33.703992 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 23:59:33.704031 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:59:33.708960 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 23:59:33.709017 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:59:33.715166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 23:59:33.715247 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 23:59:33.716669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 23:59:33.716763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:59:33.727138 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 29 23:59:33.727821 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 23:59:33.727909 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:59:33.732843 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 23:59:33.732931 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:59:33.733850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:59:33.733983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:33.759753 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 23:59:33.759905 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 23:59:33.761322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 23:59:33.761429 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 23:59:33.769085 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 23:59:33.770573 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 23:59:33.783610 systemd[1]: Switching root. Oct 29 23:59:33.814028 systemd-journald[316]: Journal stopped Oct 29 23:59:35.481622 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). 
Oct 29 23:59:35.481688 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 23:59:35.481710 kernel: SELinux: policy capability open_perms=1 Oct 29 23:59:35.481741 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 23:59:35.481753 kernel: SELinux: policy capability always_check_network=0 Oct 29 23:59:35.481765 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 23:59:35.481782 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 23:59:35.481794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 23:59:35.481806 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 23:59:35.481819 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 23:59:35.481838 kernel: audit: type=1403 audit(1761782374.409:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 23:59:35.481855 systemd[1]: Successfully loaded SELinux policy in 112.836ms. Oct 29 23:59:35.481882 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.226ms. Oct 29 23:59:35.481896 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:59:35.481909 systemd[1]: Detected virtualization kvm. Oct 29 23:59:35.481922 systemd[1]: Detected architecture x86-64. Oct 29 23:59:35.481935 systemd[1]: Detected first boot. Oct 29 23:59:35.481955 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 23:59:35.481969 zram_generator::config[1171]: No configuration found. Oct 29 23:59:35.481983 kernel: Guest personality initialized and is inactive Oct 29 23:59:35.481996 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 29 23:59:35.482008 kernel: Initialized host personality Oct 29 23:59:35.482025 kernel: NET: Registered PF_VSOCK protocol family Oct 29 23:59:35.482039 systemd[1]: Populated /etc with preset unit settings. Oct 29 23:59:35.482060 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 23:59:35.482073 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 23:59:35.482086 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 23:59:35.482112 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 23:59:35.482125 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 23:59:35.482138 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 29 23:59:35.482151 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 23:59:35.482174 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 23:59:35.482187 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 23:59:35.482200 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 23:59:35.482212 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 23:59:35.482226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:59:35.482239 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 29 23:59:35.482252 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 23:59:35.482272 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 29 23:59:35.482286 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 23:59:35.482299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:59:35.482313 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 29 23:59:35.482325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:59:35.482338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:59:35.482360 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 23:59:35.482373 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 23:59:35.482385 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 29 23:59:35.482399 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 29 23:59:35.482411 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:59:35.482424 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:59:35.482437 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:59:35.482464 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:59:35.482477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 23:59:35.482490 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 23:59:35.482504 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 23:59:35.482517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:59:35.482531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:59:35.482544 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:59:35.482565 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 23:59:35.482578 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 23:59:35.482590 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 23:59:35.482603 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 23:59:35.482616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:35.482628 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 23:59:35.482641 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 23:59:35.482662 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 29 23:59:35.482675 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 23:59:35.482688 systemd[1]: Reached target machines.target - Containers. Oct 29 23:59:35.482700 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 23:59:35.482714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 29 23:59:35.482727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:59:35.482739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 23:59:35.482759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:59:35.482772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:59:35.482785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:59:35.482798 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 23:59:35.482811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:59:35.482824 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 23:59:35.482843 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 23:59:35.482856 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 29 23:59:35.482869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 23:59:35.482882 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 23:59:35.482896 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:59:35.482908 kernel: fuse: init (API version 7.41) Oct 29 23:59:35.482921 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:59:35.482941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:59:35.482955 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:59:35.482969 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 23:59:35.482981 kernel: ACPI: bus type drm_connector registered Oct 29 23:59:35.483000 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 23:59:35.483013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:59:35.483027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:35.483040 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 23:59:35.483053 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 23:59:35.483066 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 23:59:35.483239 systemd-journald[1235]: Collecting audit messages is disabled. Oct 29 23:59:35.483283 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 23:59:35.483296 systemd-journald[1235]: Journal started Oct 29 23:59:35.483319 systemd-journald[1235]: Runtime Journal (/run/log/journal/5e1f703b74b74c2f9a5a2333c39dbeff) is 6M, max 48.1M, 42.1M free. Oct 29 23:59:35.171376 systemd[1]: Queued start job for default target multi-user.target. Oct 29 23:59:35.186306 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 23:59:35.186862 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 23:59:35.486161 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 29 23:59:35.489601 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 23:59:35.491661 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 23:59:35.493654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:59:35.496483 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 23:59:35.496741 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 23:59:35.499148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:59:35.499447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:59:35.501752 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:59:35.502000 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:59:35.504145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:59:35.504479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:59:35.506852 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 23:59:35.507160 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 23:59:35.509363 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:59:35.509625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:59:35.511880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:59:35.612733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:59:35.616847 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 23:59:35.619677 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 23:59:35.637976 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 23:59:35.644879 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:59:35.647568 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 29 23:59:35.651318 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 23:59:35.654378 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 23:59:35.656389 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 23:59:35.656433 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:59:35.659289 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 29 23:59:35.661636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:59:35.666233 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 23:59:35.670170 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 23:59:35.672146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:59:35.674209 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 23:59:35.677223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 29 23:59:35.678317 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:59:35.681406 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 23:59:35.687376 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 29 23:59:35.692298 systemd-journald[1235]: Time spent on flushing to /var/log/journal/5e1f703b74b74c2f9a5a2333c39dbeff is 18.953ms for 1049 entries. Oct 29 23:59:35.692298 systemd-journald[1235]: System Journal (/var/log/journal/5e1f703b74b74c2f9a5a2333c39dbeff) is 8M, max 163.5M, 155.5M free. Oct 29 23:59:35.729023 systemd-journald[1235]: Received client request to flush runtime journal. Oct 29 23:59:35.729333 kernel: loop1: detected capacity change from 0 to 110976 Oct 29 23:59:35.690820 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 23:59:35.695353 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 23:59:35.700198 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:59:35.703212 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 23:59:35.706748 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 23:59:35.710979 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 23:59:35.731883 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 23:59:35.737916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:59:35.751816 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 23:59:35.756923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:59:35.759729 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:59:35.762165 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 29 23:59:35.773112 kernel: loop2: detected capacity change from 0 to 224512 Oct 29 23:59:35.778077 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 23:59:35.792388 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Oct 29 23:59:35.792409 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Oct 29 23:59:35.796973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:59:35.802158 kernel: loop3: detected capacity change from 0 to 128048 Oct 29 23:59:35.830341 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 23:59:35.838119 kernel: loop4: detected capacity change from 0 to 110976 Oct 29 23:59:35.850131 kernel: loop5: detected capacity change from 0 to 224512 Oct 29 23:59:35.858123 kernel: loop6: detected capacity change from 0 to 128048 Oct 29 23:59:35.865734 (sd-merge)[1319]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 29 23:59:35.870962 (sd-merge)[1319]: Merged extensions into '/usr'. Oct 29 23:59:35.878324 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 23:59:35.878344 systemd[1]: Reloading... Oct 29 23:59:35.904335 systemd-resolved[1305]: Positive Trust Anchors: Oct 29 23:59:35.904352 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:59:35.904356 systemd-resolved[1305]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 23:59:35.904388 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:59:35.909544 systemd-resolved[1305]: Defaulting to hostname 'linux'. Oct 29 23:59:35.941218 zram_generator::config[1352]: No configuration found. Oct 29 23:59:36.157841 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 23:59:36.158381 systemd[1]: Reloading finished in 279 ms. Oct 29 23:59:36.185782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:59:36.189447 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 23:59:36.194474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:59:36.218913 systemd[1]: Starting ensure-sysext.service... Oct 29 23:59:36.221470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:59:36.242446 systemd[1]: Reload requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Oct 29 23:59:36.242464 systemd[1]: Reloading... Oct 29 23:59:36.249874 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 23:59:36.250227 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 29 23:59:36.250580 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 23:59:36.250897 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 29 23:59:36.251932 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 23:59:36.252296 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Oct 29 23:59:36.252385 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Oct 29 23:59:36.258674 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:59:36.258686 systemd-tmpfiles[1386]: Skipping /boot Oct 29 23:59:36.270317 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:59:36.270328 systemd-tmpfiles[1386]: Skipping /boot Oct 29 23:59:36.391147 zram_generator::config[1419]: No configuration found. Oct 29 23:59:36.595246 systemd[1]: Reloading finished in 352 ms. Oct 29 23:59:36.622548 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 23:59:36.660399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:59:36.672867 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:59:36.676914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 23:59:36.680634 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 29 23:59:36.683868 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 23:59:36.687678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:59:36.691150 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 23:59:36.696825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:36.697041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:59:36.701599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:59:36.706323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:59:36.711353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:59:36.713768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:59:36.713903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:59:36.714628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:36.718751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:36.719352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:59:36.720027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:59:36.720360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:59:36.720906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:36.728469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:36.728815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:59:36.738182 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:59:36.741299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:59:36.741414 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:59:36.741560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 23:59:36.743430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 29 23:59:36.743674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:59:36.746497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:59:36.746725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:59:36.753841 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:59:36.754120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:59:36.761451 systemd[1]: Finished ensure-sysext.service. Oct 29 23:59:36.763752 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:59:36.763992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:59:36.774407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:59:36.774754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:59:36.780344 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 23:59:36.783017 systemd-udevd[1459]: Using default interface naming scheme 'v257'. Oct 29 23:59:36.783681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 23:59:36.790735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 23:59:36.824743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:59:36.841712 augenrules[1492]: No rules Oct 29 23:59:36.853329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:59:36.856814 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:59:36.857131 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:59:36.895283 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 23:59:36.901450 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 23:59:36.942829 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 29 23:59:36.947449 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 23:59:37.006126 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 23:59:37.009063 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 23:59:37.012309 systemd-networkd[1501]: lo: Link UP Oct 29 23:59:37.012601 systemd-networkd[1501]: lo: Gained carrier Oct 29 23:59:37.013170 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 23:59:37.015422 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:59:37.017716 systemd[1]: Reached target network.target - Network. Oct 29 23:59:37.020679 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 23:59:37.022752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Oct 29 23:59:37.048933 systemd-networkd[1501]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:59:37.048948 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 23:59:37.049838 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 29 23:59:37.050054 systemd-networkd[1501]: eth0: Link UP Oct 29 23:59:37.052661 systemd-networkd[1501]: eth0: Gained carrier Oct 29 23:59:37.052699 systemd-networkd[1501]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:59:37.053119 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 23:59:37.056867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 23:59:37.064215 systemd-networkd[1501]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 23:59:37.064877 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection. Oct 29 23:59:37.066613 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 23:59:37.066670 systemd-timesyncd[1477]: Initial clock synchronization to Wed 2025-10-29 23:59:37.057094 UTC. Oct 29 23:59:37.079586 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 29 23:59:37.082303 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 29 23:59:37.094125 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 29 23:59:37.120123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 29 23:59:37.140124 kernel: ACPI: button: Power Button [PWRF] Oct 29 23:59:37.215766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:37.331307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:59:37.331864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:37.338458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:37.414949 kernel: kvm_amd: TSC scaling supported Oct 29 23:59:37.415175 kernel: kvm_amd: Nested Virtualization enabled Oct 29 23:59:37.415197 kernel: kvm_amd: Nested Paging enabled Oct 29 23:59:37.416289 kernel: kvm_amd: LBR virtualization supported Oct 29 23:59:37.416316 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 29 23:59:37.417279 kernel: kvm_amd: Virtual GIF supported Oct 29 23:59:37.449194 kernel: EDAC MC: Ver: 3.0.0 Oct 29 23:59:37.494133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:37.585899 ldconfig[1457]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 23:59:37.594609 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 23:59:37.598843 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 23:59:37.696155 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 23:59:37.698791 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:59:37.701042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 29 23:59:37.703537 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 23:59:37.706070 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 29 23:59:37.708559 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 23:59:37.710817 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 23:59:37.713360 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 23:59:37.715835 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 23:59:37.715872 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:59:37.717710 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:59:37.720533 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 23:59:37.724700 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 23:59:37.729008 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 23:59:37.731739 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 23:59:37.734240 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 23:59:37.747551 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 23:59:37.749805 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 23:59:37.752619 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 23:59:37.755404 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:59:37.757021 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:59:37.758813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:59:37.758852 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:59:37.760359 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 23:59:37.763738 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 23:59:37.766856 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 23:59:37.775277 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 23:59:37.778283 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 23:59:37.780481 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 23:59:37.781971 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 29 23:59:37.786865 jq[1577]: false Oct 29 23:59:37.787525 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 23:59:37.792208 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 23:59:37.795445 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 23:59:37.801145 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache Oct 29 23:59:37.801387 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 29 23:59:37.802147 oslogin_cache_refresh[1579]: Refreshing passwd entry cache Oct 29 23:59:37.809420 extend-filesystems[1578]: Found /dev/vda6 Oct 29 23:59:37.817492 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 23:59:37.818609 extend-filesystems[1578]: Found /dev/vda9 Oct 29 23:59:37.820510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 23:59:37.821986 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting Oct 29 23:59:37.821986 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 23:59:37.821986 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache Oct 29 23:59:37.821503 oslogin_cache_refresh[1579]: Failure getting users, quitting Oct 29 23:59:37.821527 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 23:59:37.821594 oslogin_cache_refresh[1579]: Refreshing group entry cache Oct 29 23:59:37.822567 extend-filesystems[1578]: Checking size of /dev/vda9 Oct 29 23:59:37.825472 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 23:59:37.826425 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 23:59:37.833267 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting Oct 29 23:59:37.833267 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 23:59:37.831441 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 23:59:37.831319 oslogin_cache_refresh[1579]: Failure getting groups, quitting Oct 29 23:59:37.831331 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 23:59:37.837279 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 23:59:37.840742 extend-filesystems[1578]: Resized partition /dev/vda9 Oct 29 23:59:37.842411 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 23:59:37.842685 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 23:59:37.843017 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 29 23:59:37.843334 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 29 23:59:37.845159 jq[1598]: true Oct 29 23:59:37.846485 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 23:59:37.846749 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 23:59:37.847468 extend-filesystems[1605]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 23:59:37.851775 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 23:59:37.852317 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 29 23:59:37.858340 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 29 23:59:37.869825 update_engine[1597]: I20251029 23:59:37.869261 1597 main.cc:92] Flatcar Update Engine starting Oct 29 23:59:37.891164 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 29 23:59:37.922376 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 29 23:59:37.948320 jq[1612]: true Oct 29 23:59:37.949958 extend-filesystems[1605]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 23:59:37.949958 extend-filesystems[1605]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 23:59:37.949958 extend-filesystems[1605]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 29 23:59:37.965844 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Oct 29 23:59:37.953161 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 23:59:37.970253 tar[1609]: linux-amd64/LICENSE Oct 29 23:59:37.970253 tar[1609]: linux-amd64/helm Oct 29 23:59:37.953767 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 23:59:38.009360 dbus-daemon[1575]: [system] SELinux support is enabled Oct 29 23:59:38.009643 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 23:59:38.013998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 23:59:38.014045 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 23:59:38.015359 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 23:59:38.015394 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 23:59:38.016876 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 23:59:38.016896 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 23:59:38.017234 systemd-logind[1592]: New seat seat0. Oct 29 23:59:38.021053 update_engine[1597]: I20251029 23:59:38.020982 1597 update_check_scheduler.cc:74] Next update check in 8m53s Oct 29 23:59:38.022540 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 23:59:38.034766 systemd[1]: Started update-engine.service - Update Engine. Oct 29 23:59:38.039709 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 23:59:38.101436 bash[1644]: Updated "/home/core/.ssh/authorized_keys" Oct 29 23:59:38.135216 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 23:59:38.138591 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 29 23:59:38.236060 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 23:59:38.331686 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 23:59:38.384971 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 23:59:38.390904 systemd[1]: Starting issuegen.service - Generate /run/issue... 
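For scale, the online resize of /dev/vda9 logged above grows the root filesystem from 456704 × 4096 B ≈ 1.87 GB (about 1.74 GiB) to 1784827 × 4096 B ≈ 7.31 GB (about 6.81 GiB), i.e. roughly a fourfold expansion performed while the filesystem stays mounted on /.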
Oct 29 23:59:38.426061 containerd[1621]: time="2025-10-29T23:59:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 23:59:38.427540 containerd[1621]: time="2025-10-29T23:59:38.427464324Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 23:59:38.442342 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 23:59:38.442655 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445379998Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.459µs" Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445428385Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445453115Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445667804Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445682347Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445709239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445794504Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:59:38.445871 containerd[1621]: time="2025-10-29T23:59:38.445806493Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446576 containerd[1621]: time="2025-10-29T23:59:38.446082109Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446576 containerd[1621]: time="2025-10-29T23:59:38.446124536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446576 containerd[1621]: time="2025-10-29T23:59:38.446135624Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446576 containerd[1621]: time="2025-10-29T23:59:38.446143526Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446576 containerd[1621]: time="2025-10-29T23:59:38.446309949Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446700 containerd[1621]: time="2025-10-29T23:59:38.446614510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446700 containerd[1621]: time="2025-10-29T23:59:38.446648895Z" level=info 
msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:59:38.446700 containerd[1621]: time="2025-10-29T23:59:38.446659181Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 23:59:38.446700 containerd[1621]: time="2025-10-29T23:59:38.446690561Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 23:59:38.446946 containerd[1621]: time="2025-10-29T23:59:38.446917409Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 23:59:38.447035 containerd[1621]: time="2025-10-29T23:59:38.447002093Z" level=info msg="metadata content store policy set" policy=shared Oct 29 23:59:38.448579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 23:59:38.455804 containerd[1621]: time="2025-10-29T23:59:38.455708117Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 23:59:38.455904 containerd[1621]: time="2025-10-29T23:59:38.455885478Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 23:59:38.456071 containerd[1621]: time="2025-10-29T23:59:38.456045581Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 23:59:38.456167 containerd[1621]: time="2025-10-29T23:59:38.456148484Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 23:59:38.456250 containerd[1621]: time="2025-10-29T23:59:38.456222591Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 23:59:38.456327 containerd[1621]: time="2025-10-29T23:59:38.456309578Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 23:59:38.456424 containerd[1621]: time="2025-10-29T23:59:38.456406242Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 23:59:38.456499 containerd[1621]: time="2025-10-29T23:59:38.456484075Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 23:59:38.456585 containerd[1621]: time="2025-10-29T23:59:38.456550730Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 23:59:38.456585 containerd[1621]: time="2025-10-29T23:59:38.456571412Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 23:59:38.456585 containerd[1621]: time="2025-10-29T23:59:38.456585124Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 23:59:38.456743 containerd[1621]: time="2025-10-29T23:59:38.456604264Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 23:59:38.456799 containerd[1621]: time="2025-10-29T23:59:38.456777338Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 23:59:38.456833 containerd[1621]: time="2025-10-29T23:59:38.456813235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 23:59:38.456855 containerd[1621]: 
time="2025-10-29T23:59:38.456840908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 23:59:38.456882 containerd[1621]: time="2025-10-29T23:59:38.456859698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 23:59:38.456902 containerd[1621]: time="2025-10-29T23:59:38.456883065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 23:59:38.456922 containerd[1621]: time="2025-10-29T23:59:38.456899721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 23:59:38.456922 containerd[1621]: time="2025-10-29T23:59:38.456918390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 23:59:38.456965 containerd[1621]: time="2025-10-29T23:59:38.456938323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 23:59:38.456990 containerd[1621]: time="2025-10-29T23:59:38.456964704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 23:59:38.457011 containerd[1621]: time="2025-10-29T23:59:38.456988833Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 23:59:38.457011 containerd[1621]: time="2025-10-29T23:59:38.457004387Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 23:59:38.457166 containerd[1621]: time="2025-10-29T23:59:38.457140903Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 23:59:38.457203 containerd[1621]: time="2025-10-29T23:59:38.457171872Z" level=info msg="Start snapshots syncer" Oct 29 23:59:38.457246 containerd[1621]: time="2025-10-29T23:59:38.457216473Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 23:59:38.457711 containerd[1621]: time="2025-10-29T23:59:38.457649908Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 23:59:38.457867 containerd[1621]: time="2025-10-29T23:59:38.457724135Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 23:59:38.457941 containerd[1621]: time="2025-10-29T23:59:38.457911682Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 23:59:38.458138 containerd[1621]: time="2025-10-29T23:59:38.458090224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 23:59:38.458163 containerd[1621]: time="2025-10-29T23:59:38.458147123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 23:59:38.458199 containerd[1621]: time="2025-10-29T23:59:38.458163429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 23:59:38.458199 containerd[1621]: time="2025-10-29T23:59:38.458179485Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 23:59:38.458199 containerd[1621]: time="2025-10-29T23:59:38.458193787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 23:59:38.458265 containerd[1621]: time="2025-10-29T23:59:38.458207579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 23:59:38.458265 containerd[1621]: time="2025-10-29T23:59:38.458226109Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 23:59:38.458303 containerd[1621]: time="2025-10-29T23:59:38.458277971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 23:59:38.458303 containerd[1621]: 
time="2025-10-29T23:59:38.458297762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 23:59:38.458345 containerd[1621]: time="2025-10-29T23:59:38.458312625Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 23:59:38.458398 containerd[1621]: time="2025-10-29T23:59:38.458376646Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:59:38.460072 containerd[1621]: time="2025-10-29T23:59:38.459970576Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:59:38.460579 containerd[1621]: time="2025-10-29T23:59:38.460543362Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:59:38.460603 containerd[1621]: time="2025-10-29T23:59:38.460576725Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:59:38.460603 containerd[1621]: time="2025-10-29T23:59:38.460591448Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 23:59:38.460653 containerd[1621]: time="2025-10-29T23:59:38.460606933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 23:59:38.460653 containerd[1621]: time="2025-10-29T23:59:38.460629117Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 23:59:38.460690 containerd[1621]: time="2025-10-29T23:59:38.460663001Z" level=info msg="runtime interface created" Oct 29 23:59:38.460690 containerd[1621]: time="2025-10-29T23:59:38.460672546Z" level=info msg="created NRI interface" Oct 29 23:59:38.460690 containerd[1621]: time="2025-10-29T23:59:38.460685227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 23:59:38.460743 containerd[1621]: time="2025-10-29T23:59:38.460703475Z" level=info msg="Connect containerd service" Oct 29 23:59:38.460743 containerd[1621]: time="2025-10-29T23:59:38.460736847Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 23:59:38.461794 containerd[1621]: time="2025-10-29T23:59:38.461748417Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 23:59:38.479362 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 23:59:38.484818 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 23:59:38.502682 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 29 23:59:38.505813 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 23:59:38.571618 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 23:59:38.576935 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:53890.service - OpenSSH per-connection server daemon (10.0.0.1:53890). 
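The "no network config found in /etc/cni/net.d" error above is expected at this stage: the CRI plugin loaded its cni section (binDir /opt/cni/bin, confDir /etc/cni/net.d), but no pod network has been installed yet, so the conf syncer has nothing to load. Purely as a sketch of the kind of file containerd looks for there (the name and the 10.88.0.0/16 subnet are illustrative placeholders, not recovered from this host), a minimal bridge conflist saved as, say, /etc/cni/net.d/10-containerd-net.conflist would look like:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.88.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }

On a node like this one, where the kubelet is later started with kubeadm-style arguments, a pod-network add-on typically installs its own config in that directory, at which point the CNI conf syncer started below picks it up.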
Oct 29 23:59:38.674614 tar[1609]: linux-amd64/README.md Oct 29 23:59:38.713716 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 53890 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:38.716042 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:38.809424 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 23:59:38.813754 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 23:59:38.818047 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 23:59:38.828846 systemd-logind[1592]: New session 1 of user core. Oct 29 23:59:38.852645 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 23:59:38.858310 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 23:59:38.860557 containerd[1621]: time="2025-10-29T23:59:38.860475678Z" level=info msg="Start subscribing containerd event" Oct 29 23:59:38.860634 containerd[1621]: time="2025-10-29T23:59:38.860607736Z" level=info msg="Start recovering state" Oct 29 23:59:38.860874 containerd[1621]: time="2025-10-29T23:59:38.860852924Z" level=info msg="Start event monitor" Oct 29 23:59:38.860929 containerd[1621]: time="2025-10-29T23:59:38.860890303Z" level=info msg="Start cni network conf syncer for default" Oct 29 23:59:38.860929 containerd[1621]: time="2025-10-29T23:59:38.860902592Z" level=info msg="Start streaming server" Oct 29 23:59:38.860929 containerd[1621]: time="2025-10-29T23:59:38.860918668Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 23:59:38.860996 containerd[1621]: time="2025-10-29T23:59:38.860934623Z" level=info msg="runtime interface starting up..." Oct 29 23:59:38.860996 containerd[1621]: time="2025-10-29T23:59:38.860947864Z" level=info msg="starting plugins..." Oct 29 23:59:38.860996 containerd[1621]: time="2025-10-29T23:59:38.860974787Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 23:59:38.861250 containerd[1621]: time="2025-10-29T23:59:38.861204339Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 23:59:38.861323 containerd[1621]: time="2025-10-29T23:59:38.861294682Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 23:59:38.862211 containerd[1621]: time="2025-10-29T23:59:38.862188666Z" level=info msg="containerd successfully booted in 0.436795s" Oct 29 23:59:38.862274 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 23:59:38.880764 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 23:59:38.883525 systemd-logind[1592]: New session c1 of user core. Oct 29 23:59:38.967263 systemd-networkd[1501]: eth0: Gained IPv6LL Oct 29 23:59:38.970527 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 23:59:38.973109 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 23:59:38.976479 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 23:59:38.979559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:59:38.983304 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 23:59:39.022482 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 23:59:39.025718 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Oct 29 23:59:39.026021 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 23:59:39.028699 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 23:59:39.055494 systemd[1697]: Queued start job for default target default.target. Oct 29 23:59:39.057702 systemd[1697]: Created slice app.slice - User Application Slice. Oct 29 23:59:39.057745 systemd[1697]: Reached target paths.target - Paths. Oct 29 23:59:39.057822 systemd[1697]: Reached target timers.target - Timers. Oct 29 23:59:39.060300 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 23:59:39.082660 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 23:59:39.082838 systemd[1697]: Reached target sockets.target - Sockets. Oct 29 23:59:39.082896 systemd[1697]: Reached target basic.target - Basic System. Oct 29 23:59:39.082940 systemd[1697]: Reached target default.target - Main User Target. Oct 29 23:59:39.083007 systemd[1697]: Startup finished in 191ms. Oct 29 23:59:39.083179 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 23:59:39.088847 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 23:59:39.159381 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:53896.service - OpenSSH per-connection server daemon (10.0.0.1:53896). Oct 29 23:59:39.217644 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 53896 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:39.219028 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:39.223252 systemd-logind[1592]: New session 2 of user core. Oct 29 23:59:39.231231 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 23:59:39.286219 sshd[1729]: Connection closed by 10.0.0.1 port 53896 Oct 29 23:59:39.286713 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:39.296670 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:53896.service: Deactivated successfully. Oct 29 23:59:39.299039 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 23:59:39.299858 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit. Oct 29 23:59:39.303370 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:53912.service - OpenSSH per-connection server daemon (10.0.0.1:53912). Oct 29 23:59:39.306524 systemd-logind[1592]: Removed session 2. Oct 29 23:59:39.347502 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 53912 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:39.348758 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:39.353518 systemd-logind[1592]: New session 3 of user core. Oct 29 23:59:39.363237 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 23:59:39.418855 sshd[1738]: Connection closed by 10.0.0.1 port 53912 Oct 29 23:59:39.419232 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:39.423747 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:53912.service: Deactivated successfully. Oct 29 23:59:39.426395 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 23:59:39.427264 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit. Oct 29 23:59:39.429843 systemd-logind[1592]: Removed session 3. Oct 29 23:59:39.828329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 23:59:39.830920 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 23:59:39.833253 systemd[1]: Startup finished in 2.946s (kernel) + 7.507s (initrd) + 5.535s (userspace) = 15.989s. Oct 29 23:59:39.833704 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:59:40.431955 kubelet[1748]: E1029 23:59:40.431866 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:59:40.435858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:59:40.436065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:59:40.436492 systemd[1]: kubelet.service: Consumed 1.224s CPU time, 265M memory peak. Oct 29 23:59:49.434678 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:50156.service - OpenSSH per-connection server daemon (10.0.0.1:50156). Oct 29 23:59:49.486913 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 50156 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:49.488477 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:49.494401 systemd-logind[1592]: New session 4 of user core. Oct 29 23:59:49.501269 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 23:59:49.557739 sshd[1764]: Connection closed by 10.0.0.1 port 50156 Oct 29 23:59:49.558052 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:49.575772 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:50156.service: Deactivated successfully. Oct 29 23:59:49.578443 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 23:59:49.579472 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Oct 29 23:59:49.583715 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:50168.service - OpenSSH per-connection server daemon (10.0.0.1:50168). Oct 29 23:59:49.584365 systemd-logind[1592]: Removed session 4. Oct 29 23:59:49.638758 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 50168 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:49.640078 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:49.646072 systemd-logind[1592]: New session 5 of user core. Oct 29 23:59:49.660307 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 29 23:59:49.711263 sshd[1773]: Connection closed by 10.0.0.1 port 50168 Oct 29 23:59:49.712423 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:49.723243 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:50168.service: Deactivated successfully. Oct 29 23:59:49.726321 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 23:59:49.727286 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Oct 29 23:59:49.731389 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:50180.service - OpenSSH per-connection server daemon (10.0.0.1:50180). Oct 29 23:59:49.731917 systemd-logind[1592]: Removed session 5. 
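The kubelet exit above (and its repeats later in this log) is simply the unit starting before /var/lib/kubelet/config.yaml exists; on a kubeadm-managed node that file is typically written during kubeadm init or join, so the failures clear once the node is bootstrapped. As a rough sketch of the file's shape only (all values are illustrative defaults, not recovered from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local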
Oct 29 23:59:49.787766 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 50180 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:49.789375 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:49.794900 systemd-logind[1592]: New session 6 of user core. Oct 29 23:59:49.805403 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 23:59:49.862580 sshd[1782]: Connection closed by 10.0.0.1 port 50180 Oct 29 23:59:49.862905 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:49.873825 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:50180.service: Deactivated successfully. Oct 29 23:59:49.875851 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 23:59:49.876647 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit. Oct 29 23:59:49.879784 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:50188.service - OpenSSH per-connection server daemon (10.0.0.1:50188). Oct 29 23:59:49.880736 systemd-logind[1592]: Removed session 6. Oct 29 23:59:49.937255 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 50188 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:49.938656 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:49.943369 systemd-logind[1592]: New session 7 of user core. Oct 29 23:59:49.956342 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 23:59:50.025723 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 23:59:50.026158 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:59:50.048548 sudo[1792]: pam_unix(sudo:session): session closed for user root Oct 29 23:59:50.050958 sshd[1791]: Connection closed by 10.0.0.1 port 50188 Oct 29 23:59:50.051418 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:50.063130 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:50188.service: Deactivated successfully. Oct 29 23:59:50.065051 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 23:59:50.065897 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Oct 29 23:59:50.069159 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:35850.service - OpenSSH per-connection server daemon (10.0.0.1:35850). Oct 29 23:59:50.069854 systemd-logind[1592]: Removed session 7. Oct 29 23:59:50.127985 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 35850 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:50.129751 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:50.135131 systemd-logind[1592]: New session 8 of user core. Oct 29 23:59:50.145303 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 29 23:59:50.203664 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 23:59:50.204078 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:59:50.212130 sudo[1804]: pam_unix(sudo:session): session closed for user root Oct 29 23:59:50.221261 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 23:59:50.221674 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:59:50.235771 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:59:50.291705 augenrules[1826]: No rules Oct 29 23:59:50.293720 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:59:50.294145 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:59:50.295573 sudo[1803]: pam_unix(sudo:session): session closed for user root Oct 29 23:59:50.297278 sshd[1802]: Connection closed by 10.0.0.1 port 35850 Oct 29 23:59:50.297597 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Oct 29 23:59:50.315141 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:35850.service: Deactivated successfully. Oct 29 23:59:50.317707 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 23:59:50.318671 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. Oct 29 23:59:50.322494 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:35864.service - OpenSSH per-connection server daemon (10.0.0.1:35864). Oct 29 23:59:50.323185 systemd-logind[1592]: Removed session 8. Oct 29 23:59:50.371961 sshd[1835]: Accepted publickey for core from 10.0.0.1 port 35864 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 29 23:59:50.373552 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:59:50.378758 systemd-logind[1592]: New session 9 of user core. Oct 29 23:59:50.393287 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 23:59:50.451327 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 23:59:50.451682 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:59:50.452888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 23:59:50.454851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:59:50.744154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:59:50.762456 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:59:50.852431 kubelet[1862]: E1029 23:59:50.852342 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:59:50.859801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:59:50.860044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:59:50.860454 systemd[1]: kubelet.service: Consumed 315ms CPU time, 111.3M memory peak. Oct 29 23:59:51.074678 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 29 23:59:51.120673 (dockerd)[1875]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 23:59:51.779053 dockerd[1875]: time="2025-10-29T23:59:51.778984386Z" level=info msg="Starting up" Oct 29 23:59:51.779843 dockerd[1875]: time="2025-10-29T23:59:51.779813659Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 23:59:51.803835 dockerd[1875]: time="2025-10-29T23:59:51.803761785Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 23:59:52.161762 dockerd[1875]: time="2025-10-29T23:59:52.161655025Z" level=info msg="Loading containers: start." Oct 29 23:59:52.175120 kernel: Initializing XFRM netlink socket Oct 29 23:59:52.479759 systemd-networkd[1501]: docker0: Link UP Oct 29 23:59:52.484787 dockerd[1875]: time="2025-10-29T23:59:52.484743015Z" level=info msg="Loading containers: done." Oct 29 23:59:52.505160 dockerd[1875]: time="2025-10-29T23:59:52.505112251Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 23:59:52.505371 dockerd[1875]: time="2025-10-29T23:59:52.505208973Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 23:59:52.505371 dockerd[1875]: time="2025-10-29T23:59:52.505316845Z" level=info msg="Initializing buildkit" Oct 29 23:59:52.506738 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3815448840-merged.mount: Deactivated successfully. Oct 29 23:59:52.545882 dockerd[1875]: time="2025-10-29T23:59:52.545807318Z" level=info msg="Completed buildkit initialization" Oct 29 23:59:52.551991 dockerd[1875]: time="2025-10-29T23:59:52.551950339Z" level=info msg="Daemon has completed initialization" Oct 29 23:59:52.552089 dockerd[1875]: time="2025-10-29T23:59:52.552033700Z" level=info msg="API listen on /run/docker.sock" Oct 29 23:59:52.552374 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 23:59:53.510883 containerd[1621]: time="2025-10-29T23:59:53.510804002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 29 23:59:54.096629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167647193.mount: Deactivated successfully. 
Oct 29 23:59:55.349592 containerd[1621]: time="2025-10-29T23:59:55.349522624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:55.350278 containerd[1621]: time="2025-10-29T23:59:55.350232323Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 29 23:59:55.351412 containerd[1621]: time="2025-10-29T23:59:55.351367768Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:55.353903 containerd[1621]: time="2025-10-29T23:59:55.353868919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:55.355023 containerd[1621]: time="2025-10-29T23:59:55.354981746Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.84409331s" Oct 29 23:59:55.355060 containerd[1621]: time="2025-10-29T23:59:55.355033674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 29 23:59:55.356208 containerd[1621]: time="2025-10-29T23:59:55.356185366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 29 23:59:56.908021 containerd[1621]: time="2025-10-29T23:59:56.907945143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:56.908707 containerd[1621]: time="2025-10-29T23:59:56.908638327Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 29 23:59:56.909864 containerd[1621]: time="2025-10-29T23:59:56.909822371Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:56.912837 containerd[1621]: time="2025-10-29T23:59:56.912798367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:56.914040 containerd[1621]: time="2025-10-29T23:59:56.913993750Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.557779955s" Oct 29 23:59:56.914040 containerd[1621]: time="2025-10-29T23:59:56.914033387Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 29 23:59:56.914740 
containerd[1621]: time="2025-10-29T23:59:56.914711106Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 29 23:59:58.937878 containerd[1621]: time="2025-10-29T23:59:58.937803465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:58.938900 containerd[1621]: time="2025-10-29T23:59:58.938861634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 29 23:59:58.940348 containerd[1621]: time="2025-10-29T23:59:58.940277388Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:58.943252 containerd[1621]: time="2025-10-29T23:59:58.943176142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:59:58.944213 containerd[1621]: time="2025-10-29T23:59:58.944141953Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.029399743s" Oct 29 23:59:58.944213 containerd[1621]: time="2025-10-29T23:59:58.944182382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 29 23:59:58.944996 containerd[1621]: time="2025-10-29T23:59:58.944960700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 30 00:00:00.105409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154395271.mount: Deactivated successfully. Oct 30 00:00:00.961557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 00:00:00.963440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:01.205547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:00:01.219584 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:00:01.420737 containerd[1621]: time="2025-10-30T00:00:01.420634718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:01.422237 containerd[1621]: time="2025-10-30T00:00:01.422206343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 30 00:00:01.423927 containerd[1621]: time="2025-10-30T00:00:01.423851595Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:01.427075 containerd[1621]: time="2025-10-30T00:00:01.427040513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:01.427716 containerd[1621]: time="2025-10-30T00:00:01.427680342Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.482684321s" Oct 30 00:00:01.427805 containerd[1621]: time="2025-10-30T00:00:01.427717787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 30 00:00:01.428452 containerd[1621]: time="2025-10-30T00:00:01.428410909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 30 00:00:01.437600 kubelet[2179]: E1030 00:00:01.437526 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:00:01.442825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:00:01.443130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:00:01.443658 systemd[1]: kubelet.service: Consumed 429ms CPU time, 110.6M memory peak. Oct 30 00:00:02.125304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750375829.mount: Deactivated successfully. 
Oct 30 00:00:02.887577 containerd[1621]: time="2025-10-30T00:00:02.887485213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:02.888434 containerd[1621]: time="2025-10-30T00:00:02.888389043Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 30 00:00:02.889865 containerd[1621]: time="2025-10-30T00:00:02.889812657Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:02.892850 containerd[1621]: time="2025-10-30T00:00:02.892796233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:02.893743 containerd[1621]: time="2025-10-30T00:00:02.893699372Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.465254906s" Oct 30 00:00:02.893743 containerd[1621]: time="2025-10-30T00:00:02.893734033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 30 00:00:02.894352 containerd[1621]: time="2025-10-30T00:00:02.894311436Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 00:00:03.416838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349079106.mount: Deactivated successfully. 
Oct 30 00:00:03.422582 containerd[1621]: time="2025-10-30T00:00:03.422519270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:00:03.423470 containerd[1621]: time="2025-10-30T00:00:03.423394324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 00:00:03.425259 containerd[1621]: time="2025-10-30T00:00:03.425197186Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:00:03.427391 containerd[1621]: time="2025-10-30T00:00:03.427344678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:00:03.428080 containerd[1621]: time="2025-10-30T00:00:03.428038175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.685597ms" Oct 30 00:00:03.428080 containerd[1621]: time="2025-10-30T00:00:03.428073998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 00:00:03.428780 containerd[1621]: time="2025-10-30T00:00:03.428741771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 30 00:00:04.037341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150935147.mount: Deactivated successfully. 
Oct 30 00:00:06.988071 containerd[1621]: time="2025-10-30T00:00:06.987994047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:06.989085 containerd[1621]: time="2025-10-30T00:00:06.989000743Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 30 00:00:06.990453 containerd[1621]: time="2025-10-30T00:00:06.990392885Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:06.993671 containerd[1621]: time="2025-10-30T00:00:06.993626138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:06.995068 containerd[1621]: time="2025-10-30T00:00:06.994998085Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.566228284s" Oct 30 00:00:06.995068 containerd[1621]: time="2025-10-30T00:00:06.995040499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 30 00:00:09.531278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:00:09.531537 systemd[1]: kubelet.service: Consumed 429ms CPU time, 110.6M memory peak. Oct 30 00:00:09.534293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:09.564672 systemd[1]: Reload requested from client PID 2328 ('systemctl') (unit session-9.scope)... Oct 30 00:00:09.564709 systemd[1]: Reloading... Oct 30 00:00:09.685205 zram_generator::config[2370]: No configuration found. Oct 30 00:00:10.126316 systemd[1]: Reloading finished in 561 ms. Oct 30 00:00:10.190066 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:00:10.190235 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:00:10.190642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:00:10.190702 systemd[1]: kubelet.service: Consumed 189ms CPU time, 98.4M memory peak. Oct 30 00:00:10.192684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:10.459179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:00:10.481535 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:00:10.525420 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:00:10.525420 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:00:10.525420 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:00:10.525895 kubelet[2419]: I1030 00:00:10.525455 2419 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:00:10.756067 kubelet[2419]: I1030 00:00:10.755998 2419 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 00:00:10.756067 kubelet[2419]: I1030 00:00:10.756029 2419 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:00:10.756389 kubelet[2419]: I1030 00:00:10.756360 2419 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 00:00:10.786744 kubelet[2419]: E1030 00:00:10.786673 2419 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:10.789587 kubelet[2419]: I1030 00:00:10.789521 2419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:00:10.800387 kubelet[2419]: I1030 00:00:10.800336 2419 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:00:10.806647 kubelet[2419]: I1030 00:00:10.806602 2419 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 00:00:10.807860 kubelet[2419]: I1030 00:00:10.807811 2419 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:00:10.808034 kubelet[2419]: I1030 00:00:10.807848 2419 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:00:10.808259 kubelet[2419]: I1030 00:00:10.808040 2419 
topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:00:10.808259 kubelet[2419]: I1030 00:00:10.808049 2419 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 00:00:10.808259 kubelet[2419]: I1030 00:00:10.808214 2419 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:00:10.811274 kubelet[2419]: I1030 00:00:10.811247 2419 kubelet.go:446] "Attempting to sync node with API server" Oct 30 00:00:10.811331 kubelet[2419]: I1030 00:00:10.811308 2419 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:00:10.811388 kubelet[2419]: I1030 00:00:10.811351 2419 kubelet.go:352] "Adding apiserver pod source" Oct 30 00:00:10.811388 kubelet[2419]: I1030 00:00:10.811368 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:00:10.816133 kubelet[2419]: I1030 00:00:10.816109 2419 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:00:10.817811 kubelet[2419]: I1030 00:00:10.817784 2419 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 00:00:10.817878 kubelet[2419]: W1030 00:00:10.817867 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:00:10.818490 kubelet[2419]: W1030 00:00:10.818426 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:10.818548 kubelet[2419]: E1030 00:00:10.818501 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:10.819261 kubelet[2419]: W1030 00:00:10.819197 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:10.819314 kubelet[2419]: E1030 00:00:10.819257 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:10.821743 kubelet[2419]: I1030 00:00:10.821712 2419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:00:10.821932 kubelet[2419]: I1030 00:00:10.821894 2419 server.go:1287] "Started kubelet" Oct 30 00:00:10.824042 kubelet[2419]: I1030 00:00:10.823997 2419 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:00:10.824120 kubelet[2419]: I1030 00:00:10.823957 2419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:00:10.824713 kubelet[2419]: I1030 00:00:10.824676 2419 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:00:10.855953 kubelet[2419]: I1030 00:00:10.855673 
2419 server.go:479] "Adding debug handlers to kubelet server" Oct 30 00:00:10.857226 kubelet[2419]: I1030 00:00:10.857182 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:00:10.857290 kubelet[2419]: E1030 00:00:10.855712 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731bc4b08f90a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 00:00:10.821734568 +0000 UTC m=+0.335956290,LastTimestamp:2025-10-30 00:00:10.821734568 +0000 UTC m=+0.335956290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 00:00:10.857531 kubelet[2419]: I1030 00:00:10.857502 2419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:00:10.859490 kubelet[2419]: E1030 00:00:10.859473 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:10.859575 kubelet[2419]: I1030 00:00:10.859565 2419 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:00:10.859819 kubelet[2419]: I1030 00:00:10.859801 2419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:00:10.859943 kubelet[2419]: I1030 00:00:10.859931 2419 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:00:10.860393 kubelet[2419]: W1030 00:00:10.860358 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:10.860491 kubelet[2419]: E1030 00:00:10.860473 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:10.860643 kubelet[2419]: E1030 00:00:10.860621 2419 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:00:10.860720 kubelet[2419]: E1030 00:00:10.860681 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Oct 30 00:00:10.861875 kubelet[2419]: I1030 00:00:10.861858 2419 factory.go:221] Registration of the containerd container factory successfully Oct 30 00:00:10.861945 kubelet[2419]: I1030 00:00:10.861936 2419 factory.go:221] Registration of the systemd container factory successfully Oct 30 00:00:10.862135 kubelet[2419]: I1030 00:00:10.862118 2419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:00:10.878137 kubelet[2419]: I1030 00:00:10.878052 2419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 00:00:10.878530 kubelet[2419]: I1030 00:00:10.878514 2419 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:00:10.878606 kubelet[2419]: I1030 00:00:10.878525 2419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:00:10.878606 kubelet[2419]: I1030 00:00:10.878568 2419 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:00:10.879882 kubelet[2419]: I1030 00:00:10.879851 2419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 00:00:10.879882 kubelet[2419]: I1030 00:00:10.879876 2419 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 00:00:10.896210 kubelet[2419]: I1030 00:00:10.879897 2419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
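The reflector failures above show the kubelet trying to list Nodes, Services, and CSIDrivers from the API server at https://10.0.0.55:6443 before that endpoint is serving, so every request ends in "connection refused" and is retried with backoff. The list the Node reflector keeps attempting has roughly the shape of the client-go sketch below; the kubeconfig path /etc/kubernetes/kubelet.conf is an assumption for illustration and does not appear in this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the kubelet itself uses its bootstrap/rotated credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same shape as the reflector's request in the log:
	// GET /api/v1/nodes?fieldSelector=metadata.name=localhost&limit=500
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
		Limit:         500,
	})
	if err != nil {
		// While the API server is still down this fails with "connection refused",
		// just like the reflector entries above.
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}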
Oct 30 00:00:10.896210 kubelet[2419]: I1030 00:00:10.879908 2419 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 00:00:10.896210 kubelet[2419]: E1030 00:00:10.879957 2419 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:00:10.960739 kubelet[2419]: E1030 00:00:10.960637 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:10.980899 kubelet[2419]: E1030 00:00:10.980851 2419 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:00:11.061018 kubelet[2419]: E1030 00:00:11.060831 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.061422 kubelet[2419]: E1030 00:00:11.061366 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Oct 30 00:00:11.161902 kubelet[2419]: E1030 00:00:11.161826 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.181013 kubelet[2419]: E1030 00:00:11.180968 2419 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:00:11.262533 kubelet[2419]: E1030 00:00:11.262454 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.362741 kubelet[2419]: E1030 00:00:11.362567 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.462619 kubelet[2419]: E1030 00:00:11.462538 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Oct 30 00:00:11.463537 kubelet[2419]: E1030 00:00:11.463480 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.564008 kubelet[2419]: E1030 00:00:11.563910 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.581194 kubelet[2419]: E1030 00:00:11.581066 2419 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:00:11.664807 kubelet[2419]: E1030 00:00:11.664608 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.681648 kubelet[2419]: W1030 00:00:11.681583 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:11.681648 kubelet[2419]: E1030 00:00:11.681631 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection 
refused" logger="UnhandledError" Oct 30 00:00:11.765388 kubelet[2419]: E1030 00:00:11.765305 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.837127 kubelet[2419]: W1030 00:00:11.837000 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:11.837127 kubelet[2419]: E1030 00:00:11.837126 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:11.854354 kubelet[2419]: I1030 00:00:11.854172 2419 policy_none.go:49] "None policy: Start" Oct 30 00:00:11.854354 kubelet[2419]: I1030 00:00:11.854230 2419 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:00:11.854354 kubelet[2419]: I1030 00:00:11.854248 2419 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:00:11.865757 kubelet[2419]: E1030 00:00:11.865693 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:11.883230 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:00:11.888190 kubelet[2419]: W1030 00:00:11.888161 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:11.888260 kubelet[2419]: E1030 00:00:11.888214 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:11.897872 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:00:11.901822 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 00:00:11.924182 kubelet[2419]: I1030 00:00:11.924072 2419 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 00:00:11.924722 kubelet[2419]: I1030 00:00:11.924696 2419 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:00:11.924773 kubelet[2419]: I1030 00:00:11.924711 2419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:00:11.924995 kubelet[2419]: I1030 00:00:11.924975 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:00:11.925773 kubelet[2419]: E1030 00:00:11.925750 2419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 00:00:11.925844 kubelet[2419]: E1030 00:00:11.925811 2419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 00:00:12.026239 kubelet[2419]: I1030 00:00:12.026201 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:00:12.026641 kubelet[2419]: E1030 00:00:12.026599 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 30 00:00:12.209422 kubelet[2419]: W1030 00:00:12.209282 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:12.209422 kubelet[2419]: E1030 00:00:12.209325 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:12.229013 kubelet[2419]: I1030 00:00:12.228984 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:00:12.229429 kubelet[2419]: E1030 00:00:12.229382 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 30 00:00:12.262967 kubelet[2419]: E1030 00:00:12.262937 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Oct 30 00:00:12.391375 systemd[1]: Created slice kubepods-burstable-pod5e7a679ba66702a0abef6fcba7ec8f4f.slice - libcontainer container kubepods-burstable-pod5e7a679ba66702a0abef6fcba7ec8f4f.slice. Oct 30 00:00:12.411505 kubelet[2419]: E1030 00:00:12.411449 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:12.415104 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 30 00:00:12.417991 kubelet[2419]: E1030 00:00:12.417958 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:12.420821 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 30 00:00:12.422639 kubelet[2419]: E1030 00:00:12.422605 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:12.470049 kubelet[2419]: I1030 00:00:12.469980 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:00:12.470049 kubelet[2419]: I1030 00:00:12.470036 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e7a679ba66702a0abef6fcba7ec8f4f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e7a679ba66702a0abef6fcba7ec8f4f\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:00:12.470049 kubelet[2419]: I1030 00:00:12.470059 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:12.470298 kubelet[2419]: I1030 00:00:12.470076 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:12.470298 kubelet[2419]: I1030 00:00:12.470112 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:12.470298 kubelet[2419]: I1030 00:00:12.470136 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:12.470298 kubelet[2419]: I1030 00:00:12.470172 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:12.470298 kubelet[2419]: I1030 00:00:12.470196 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e7a679ba66702a0abef6fcba7ec8f4f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e7a679ba66702a0abef6fcba7ec8f4f\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:00:12.470418 kubelet[2419]: I1030 00:00:12.470214 2419 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e7a679ba66702a0abef6fcba7ec8f4f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e7a679ba66702a0abef6fcba7ec8f4f\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:00:12.630995 kubelet[2419]: I1030 00:00:12.630941 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:00:12.631449 kubelet[2419]: E1030 00:00:12.631420 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 30 00:00:12.712264 kubelet[2419]: E1030 00:00:12.712223 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:12.712994 containerd[1621]: time="2025-10-30T00:00:12.712942341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e7a679ba66702a0abef6fcba7ec8f4f,Namespace:kube-system,Attempt:0,}" Oct 30 00:00:12.719198 kubelet[2419]: E1030 00:00:12.719152 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:12.719614 containerd[1621]: time="2025-10-30T00:00:12.719573554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 30 00:00:12.723943 kubelet[2419]: E1030 00:00:12.723847 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:12.724166 containerd[1621]: time="2025-10-30T00:00:12.724138649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 30 00:00:12.791194 kubelet[2419]: E1030 00:00:12.791144 2419 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:13.017696 containerd[1621]: time="2025-10-30T00:00:13.017407837Z" level=info msg="connecting to shim 4958be51af6e892e998633e15e4f11a69a4ef4c8043b2017eca2b115ef6e4c7b" address="unix:///run/containerd/s/f20becdcad075546bbc554b06ff8c1fbc90a049b817187d58a34a2a482d81cdb" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:00:13.018572 containerd[1621]: time="2025-10-30T00:00:13.018531295Z" level=info msg="connecting to shim ecdefd8c4a36e67547259c4c2eaf769fc668da8802bc92eb8a39f823562e6d66" address="unix:///run/containerd/s/dd5771f330b3b67426162d96ee19469b313d7d6711e3da13879e4bea8d8a3697" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:00:13.018979 containerd[1621]: time="2025-10-30T00:00:13.018957303Z" level=info msg="connecting to shim e37c5f9981b658c87b2450bad95fd868980564f8f26466750e0789e8ae553214" address="unix:///run/containerd/s/7b542f19fe131d6a466ff0f6bea3fe60e696ae7a8f7ccd6daf78ee70bab9237c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:00:13.090647 systemd[1]: 
Started cri-containerd-4958be51af6e892e998633e15e4f11a69a4ef4c8043b2017eca2b115ef6e4c7b.scope - libcontainer container 4958be51af6e892e998633e15e4f11a69a4ef4c8043b2017eca2b115ef6e4c7b. Oct 30 00:00:13.118353 systemd[1]: Started cri-containerd-ecdefd8c4a36e67547259c4c2eaf769fc668da8802bc92eb8a39f823562e6d66.scope - libcontainer container ecdefd8c4a36e67547259c4c2eaf769fc668da8802bc92eb8a39f823562e6d66. Oct 30 00:00:13.124261 systemd[1]: Started cri-containerd-e37c5f9981b658c87b2450bad95fd868980564f8f26466750e0789e8ae553214.scope - libcontainer container e37c5f9981b658c87b2450bad95fd868980564f8f26466750e0789e8ae553214. Oct 30 00:00:13.290145 containerd[1621]: time="2025-10-30T00:00:13.289642888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4958be51af6e892e998633e15e4f11a69a4ef4c8043b2017eca2b115ef6e4c7b\"" Oct 30 00:00:13.290940 kubelet[2419]: E1030 00:00:13.290898 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:13.292824 containerd[1621]: time="2025-10-30T00:00:13.292787120Z" level=info msg="CreateContainer within sandbox \"4958be51af6e892e998633e15e4f11a69a4ef4c8043b2017eca2b115ef6e4c7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:00:13.296978 containerd[1621]: time="2025-10-30T00:00:13.296928946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e7a679ba66702a0abef6fcba7ec8f4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e37c5f9981b658c87b2450bad95fd868980564f8f26466750e0789e8ae553214\"" Oct 30 00:00:13.297617 kubelet[2419]: E1030 00:00:13.297578 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:13.298434 containerd[1621]: time="2025-10-30T00:00:13.298391638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecdefd8c4a36e67547259c4c2eaf769fc668da8802bc92eb8a39f823562e6d66\"" Oct 30 00:00:13.299178 kubelet[2419]: E1030 00:00:13.299139 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:13.299685 containerd[1621]: time="2025-10-30T00:00:13.299626834Z" level=info msg="CreateContainer within sandbox \"e37c5f9981b658c87b2450bad95fd868980564f8f26466750e0789e8ae553214\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:00:13.300753 containerd[1621]: time="2025-10-30T00:00:13.300710051Z" level=info msg="CreateContainer within sandbox \"ecdefd8c4a36e67547259c4c2eaf769fc668da8802bc92eb8a39f823562e6d66\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:00:13.311134 containerd[1621]: time="2025-10-30T00:00:13.311035785Z" level=info msg="Container 8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:00:13.319360 containerd[1621]: time="2025-10-30T00:00:13.319278535Z" level=info msg="Container b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80: CDI devices from CRI Config.CDIDevices: []" Oct 30 
00:00:13.326209 containerd[1621]: time="2025-10-30T00:00:13.326149945Z" level=info msg="Container c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:00:13.330438 containerd[1621]: time="2025-10-30T00:00:13.330339757Z" level=info msg="CreateContainer within sandbox \"4958be51af6e892e998633e15e4f11a69a4ef4c8043b2017eca2b115ef6e4c7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af\"" Oct 30 00:00:13.331379 containerd[1621]: time="2025-10-30T00:00:13.331320611Z" level=info msg="StartContainer for \"8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af\"" Oct 30 00:00:13.333001 containerd[1621]: time="2025-10-30T00:00:13.332966649Z" level=info msg="connecting to shim 8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af" address="unix:///run/containerd/s/f20becdcad075546bbc554b06ff8c1fbc90a049b817187d58a34a2a482d81cdb" protocol=ttrpc version=3 Oct 30 00:00:13.333670 containerd[1621]: time="2025-10-30T00:00:13.333628947Z" level=info msg="CreateContainer within sandbox \"e37c5f9981b658c87b2450bad95fd868980564f8f26466750e0789e8ae553214\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80\"" Oct 30 00:00:13.334270 containerd[1621]: time="2025-10-30T00:00:13.334207235Z" level=info msg="StartContainer for \"b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80\"" Oct 30 00:00:13.335501 containerd[1621]: time="2025-10-30T00:00:13.335416847Z" level=info msg="connecting to shim b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80" address="unix:///run/containerd/s/7b542f19fe131d6a466ff0f6bea3fe60e696ae7a8f7ccd6daf78ee70bab9237c" protocol=ttrpc version=3 Oct 30 00:00:13.342127 containerd[1621]: time="2025-10-30T00:00:13.341502750Z" level=info msg="CreateContainer within sandbox \"ecdefd8c4a36e67547259c4c2eaf769fc668da8802bc92eb8a39f823562e6d66\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb\"" Oct 30 00:00:13.342461 containerd[1621]: time="2025-10-30T00:00:13.342430620Z" level=info msg="StartContainer for \"c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb\"" Oct 30 00:00:13.342707 kubelet[2419]: W1030 00:00:13.342626 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Oct 30 00:00:13.342788 kubelet[2419]: E1030 00:00:13.342726 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:00:13.344758 containerd[1621]: time="2025-10-30T00:00:13.344724530Z" level=info msg="connecting to shim c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb" address="unix:///run/containerd/s/dd5771f330b3b67426162d96ee19469b313d7d6711e3da13879e4bea8d8a3697" protocol=ttrpc version=3 Oct 30 00:00:13.378406 systemd[1]: Started cri-containerd-8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af.scope - libcontainer container 
8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af. Oct 30 00:00:13.395372 systemd[1]: Started cri-containerd-b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80.scope - libcontainer container b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80. Oct 30 00:00:13.398411 systemd[1]: Started cri-containerd-c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb.scope - libcontainer container c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb. Oct 30 00:00:13.433584 kubelet[2419]: I1030 00:00:13.433286 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:00:13.433844 kubelet[2419]: E1030 00:00:13.433818 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Oct 30 00:00:13.497193 containerd[1621]: time="2025-10-30T00:00:13.491296126Z" level=info msg="StartContainer for \"c5bf8b1779b0a9240fc73673e3447a40b769481a04fe954891527cf14b3238eb\" returns successfully" Oct 30 00:00:13.523004 containerd[1621]: time="2025-10-30T00:00:13.522927173Z" level=info msg="StartContainer for \"8318c46404281f6ee8365eda9e727b7a53b6096aea7b24f1d905753ce7b329af\" returns successfully" Oct 30 00:00:13.548405 containerd[1621]: time="2025-10-30T00:00:13.545390102Z" level=info msg="StartContainer for \"b0b30c401954533b00d892dc824d9ff21e8b5e6e53a6572ca401a3a842ac3c80\" returns successfully" Oct 30 00:00:13.908205 kubelet[2419]: E1030 00:00:13.907742 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:13.908205 kubelet[2419]: E1030 00:00:13.907884 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:13.911520 kubelet[2419]: E1030 00:00:13.911488 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:13.911627 kubelet[2419]: E1030 00:00:13.911604 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:13.912374 kubelet[2419]: E1030 00:00:13.912348 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:13.912485 kubelet[2419]: E1030 00:00:13.912464 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:14.914565 kubelet[2419]: E1030 00:00:14.914515 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:00:14.915172 kubelet[2419]: E1030 00:00:14.914635 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:14.915401 kubelet[2419]: E1030 00:00:14.915378 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 
00:00:14.915519 kubelet[2419]: E1030 00:00:14.915496 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:15.035950 kubelet[2419]: I1030 00:00:15.035869 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:00:15.059590 kubelet[2419]: E1030 00:00:15.059541 2419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 00:00:15.153399 kubelet[2419]: I1030 00:00:15.153224 2419 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:00:15.153399 kubelet[2419]: E1030 00:00:15.153265 2419 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 30 00:00:15.160620 kubelet[2419]: I1030 00:00:15.160585 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:15.216677 kubelet[2419]: E1030 00:00:15.216629 2419 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:00:15.216677 kubelet[2419]: I1030 00:00:15.216662 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:00:15.219241 kubelet[2419]: E1030 00:00:15.219199 2419 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 00:00:15.219241 kubelet[2419]: I1030 00:00:15.219239 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:00:15.220717 kubelet[2419]: E1030 00:00:15.220691 2419 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 00:00:15.857999 kubelet[2419]: I1030 00:00:15.857942 2419 apiserver.go:52] "Watching apiserver" Oct 30 00:00:15.860410 kubelet[2419]: I1030 00:00:15.860357 2419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:00:16.917879 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-9.scope)... Oct 30 00:00:16.917896 systemd[1]: Reloading... Oct 30 00:00:17.004155 zram_generator::config[2741]: No configuration found. Oct 30 00:00:17.251132 systemd[1]: Reloading finished in 332 ms. Oct 30 00:00:17.286633 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:17.301528 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:00:17.301867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:00:17.301926 systemd[1]: kubelet.service: Consumed 971ms CPU time, 131.6M memory peak. Oct 30 00:00:17.303910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:17.519546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
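The "Failed creating a mirror pod" errors above are transient: the kubelet tries to publish mirror pods for its three static pods before the API server's bootstrap controllers have created the built-in PriorityClass system-node-critical. A hedged sketch of checking for that object with client-go is below; the admin kubeconfig path is an assumption for illustration.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The mirror-pod rejections above clear once this object exists.
	pc, err := client.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not available yet:", err)
		return
	}
	fmt.Println("system-node-critical value:", pc.Value)
}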
Oct 30 00:00:17.524730 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:00:17.566027 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:00:17.566027 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:00:17.566027 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:00:17.566461 kubelet[2786]: I1030 00:00:17.566153 2786 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:00:17.574048 kubelet[2786]: I1030 00:00:17.573999 2786 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 00:00:17.574048 kubelet[2786]: I1030 00:00:17.574035 2786 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:00:17.574350 kubelet[2786]: I1030 00:00:17.574324 2786 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 00:00:17.575494 kubelet[2786]: I1030 00:00:17.575468 2786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 30 00:00:17.579128 kubelet[2786]: I1030 00:00:17.579057 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:00:17.584162 kubelet[2786]: I1030 00:00:17.584132 2786 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:00:17.588841 kubelet[2786]: I1030 00:00:17.588807 2786 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:00:17.589085 kubelet[2786]: I1030 00:00:17.589028 2786 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:00:17.589260 kubelet[2786]: I1030 00:00:17.589058 2786 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:00:17.589260 kubelet[2786]: I1030 00:00:17.589258 2786 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:00:17.589431 kubelet[2786]: I1030 00:00:17.589268 2786 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 00:00:17.589431 kubelet[2786]: I1030 00:00:17.589322 2786 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:00:17.589544 kubelet[2786]: I1030 00:00:17.589476 2786 kubelet.go:446] "Attempting to sync node with API server" Oct 30 00:00:17.589544 kubelet[2786]: I1030 00:00:17.589508 2786 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:00:17.589544 kubelet[2786]: I1030 00:00:17.589530 2786 kubelet.go:352] "Adding apiserver pod source" Oct 30 00:00:17.589544 kubelet[2786]: I1030 00:00:17.589539 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:00:17.591064 kubelet[2786]: I1030 00:00:17.590999 2786 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:00:17.591424 kubelet[2786]: I1030 00:00:17.591391 2786 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 00:00:17.591918 kubelet[2786]: I1030 00:00:17.591863 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:00:17.591918 kubelet[2786]: I1030 00:00:17.591910 2786 server.go:1287] "Started kubelet" Oct 30 00:00:17.595567 kubelet[2786]: I1030 00:00:17.595489 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:00:17.597114 kubelet[2786]: I1030 
00:00:17.596249 2786 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:00:17.597114 kubelet[2786]: I1030 00:00:17.595733 2786 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:00:17.597114 kubelet[2786]: E1030 00:00:17.596805 2786 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:00:17.597114 kubelet[2786]: I1030 00:00:17.596986 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:00:17.597846 kubelet[2786]: I1030 00:00:17.597828 2786 server.go:479] "Adding debug handlers to kubelet server" Oct 30 00:00:17.606145 kubelet[2786]: I1030 00:00:17.606075 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:00:17.607128 kubelet[2786]: I1030 00:00:17.606837 2786 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:00:17.607128 kubelet[2786]: E1030 00:00:17.606957 2786 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:00:17.609024 kubelet[2786]: I1030 00:00:17.608179 2786 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:00:17.609024 kubelet[2786]: I1030 00:00:17.608258 2786 factory.go:221] Registration of the systemd container factory successfully Oct 30 00:00:17.609024 kubelet[2786]: I1030 00:00:17.608347 2786 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:00:17.609024 kubelet[2786]: I1030 00:00:17.608406 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:00:17.611726 kubelet[2786]: I1030 00:00:17.610653 2786 factory.go:221] Registration of the containerd container factory successfully Oct 30 00:00:17.615051 kubelet[2786]: I1030 00:00:17.615000 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 00:00:17.616592 kubelet[2786]: I1030 00:00:17.616559 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 00:00:17.616660 kubelet[2786]: I1030 00:00:17.616597 2786 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 00:00:17.616660 kubelet[2786]: I1030 00:00:17.616630 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
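On this second start the kubelet loads its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem and its serving pair from /var/lib/kubelet/pki/kubelet.crt and kubelet.key (paths taken from the entries above). A minimal sketch for inspecting the client certificate's validity window, which is useful when debugging rotation, is below; it only assumes the file is a PEM bundle containing the certificate alongside the key.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet log above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}

	// The file holds certificate and key blocks concatenated; walk the PEM
	// blocks and report each certificate's validity window.
	rest := data
	for {
		var block *pem.Block
		block, rest = pem.Decode(rest)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}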
Oct 30 00:00:17.616660 kubelet[2786]: I1030 00:00:17.616646 2786 kubelet.go:2382] "Starting kubelet main sync loop"
Oct 30 00:00:17.616825 kubelet[2786]: E1030 00:00:17.616769 2786 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 30 00:00:17.654522 kubelet[2786]: I1030 00:00:17.654471 2786 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 30 00:00:17.654722 kubelet[2786]: I1030 00:00:17.654683 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 30 00:00:17.654722 kubelet[2786]: I1030 00:00:17.654714 2786 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 00:00:17.654987 kubelet[2786]: I1030 00:00:17.654912 2786 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 30 00:00:17.654987 kubelet[2786]: I1030 00:00:17.654925 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 30 00:00:17.654987 kubelet[2786]: I1030 00:00:17.654946 2786 policy_none.go:49] "None policy: Start"
Oct 30 00:00:17.654987 kubelet[2786]: I1030 00:00:17.654957 2786 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 30 00:00:17.654987 kubelet[2786]: I1030 00:00:17.654972 2786 state_mem.go:35] "Initializing new in-memory state store"
Oct 30 00:00:17.655234 kubelet[2786]: I1030 00:00:17.655142 2786 state_mem.go:75] "Updated machine memory state"
Oct 30 00:00:17.660299 kubelet[2786]: I1030 00:00:17.660268 2786 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 30 00:00:17.660639 kubelet[2786]: I1030 00:00:17.660537 2786 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 30 00:00:17.660689 kubelet[2786]: I1030 00:00:17.660656 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 30 00:00:17.660862 kubelet[2786]: I1030 00:00:17.660833 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 30 00:00:17.661808 kubelet[2786]: E1030 00:00:17.661782 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 30 00:00:17.718368 kubelet[2786]: I1030 00:00:17.718317 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 00:00:17.718603 kubelet[2786]: I1030 00:00:17.718562 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 00:00:17.718695 kubelet[2786]: I1030 00:00:17.718621 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 00:00:17.767862 kubelet[2786]: I1030 00:00:17.767822 2786 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 00:00:17.880644 kubelet[2786]: I1030 00:00:17.880484 2786 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 30 00:00:17.880644 kubelet[2786]: I1030 00:00:17.880608 2786 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 00:00:17.909603 kubelet[2786]: I1030 00:00:17.909536 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e7a679ba66702a0abef6fcba7ec8f4f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e7a679ba66702a0abef6fcba7ec8f4f\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 00:00:17.909603 kubelet[2786]: I1030 00:00:17.909573 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 00:00:17.909603 kubelet[2786]: I1030 00:00:17.909594 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 00:00:17.909603 kubelet[2786]: I1030 00:00:17.909615 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Oct 30 00:00:17.909603 kubelet[2786]: I1030 00:00:17.909632 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e7a679ba66702a0abef6fcba7ec8f4f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e7a679ba66702a0abef6fcba7ec8f4f\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 00:00:17.910016 kubelet[2786]: I1030 00:00:17.909649 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 00:00:17.910016 kubelet[2786]: I1030 00:00:17.909668 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 00:00:17.910016 kubelet[2786]: I1030 00:00:17.909685 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 00:00:17.910016 kubelet[2786]: I1030 00:00:17.909746 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e7a679ba66702a0abef6fcba7ec8f4f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e7a679ba66702a0abef6fcba7ec8f4f\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 00:00:18.068982 kubelet[2786]: E1030 00:00:18.068929 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:18.069150 kubelet[2786]: E1030 00:00:18.068994 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:18.069150 kubelet[2786]: E1030 00:00:18.069002 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:18.589987 kubelet[2786]: I1030 00:00:18.589946 2786 apiserver.go:52] "Watching apiserver"
Oct 30 00:00:18.609004 kubelet[2786]: I1030 00:00:18.608942 2786 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 00:00:18.632845 kubelet[2786]: I1030 00:00:18.632804 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 00:00:18.632900 kubelet[2786]: I1030 00:00:18.632849 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 00:00:18.633237 kubelet[2786]: E1030 00:00:18.633214 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:19.153111 kubelet[2786]: E1030 00:00:19.152760 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 30 00:00:19.153111 kubelet[2786]: E1030 00:00:19.152811 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 30 00:00:19.153111 kubelet[2786]: E1030 00:00:19.152999 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:19.153111 kubelet[2786]: E1030 00:00:19.153007 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:19.391943 kubelet[2786]: I1030 00:00:19.391862 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.391832975 podStartE2EDuration="2.391832975s" podCreationTimestamp="2025-10-30 00:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:00:19.373282503 +0000 UTC m=+1.844746056" watchObservedRunningTime="2025-10-30 00:00:19.391832975 +0000 UTC m=+1.863300886"
Oct 30 00:00:19.403392 kubelet[2786]: I1030 00:00:19.403238 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.402798021 podStartE2EDuration="2.402798021s" podCreationTimestamp="2025-10-30 00:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:00:19.392040468 +0000 UTC m=+1.863504011" watchObservedRunningTime="2025-10-30 00:00:19.402798021 +0000 UTC m=+1.874261554"
Oct 30 00:00:19.413381 kubelet[2786]: I1030 00:00:19.413317 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.413285949 podStartE2EDuration="2.413285949s" podCreationTimestamp="2025-10-30 00:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:00:19.403399811 +0000 UTC m=+1.874863354" watchObservedRunningTime="2025-10-30 00:00:19.413285949 +0000 UTC m=+1.884749492"
Oct 30 00:00:19.635817 kubelet[2786]: E1030 00:00:19.635215 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:19.635817 kubelet[2786]: E1030 00:00:19.635384 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:20.635848 kubelet[2786]: E1030 00:00:20.635807 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:20.636402 kubelet[2786]: E1030 00:00:20.635989 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:21.422324 kubelet[2786]: E1030 00:00:21.422274 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:22.849522 update_engine[1597]: I20251030 00:00:22.849393 1597 update_attempter.cc:509] Updating boot flags...
Oct 30 00:00:23.897542 kubelet[2786]: I1030 00:00:23.897491 2786 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 30 00:00:23.898076 containerd[1621]: time="2025-10-30T00:00:23.898031341Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 30 00:00:23.898357 kubelet[2786]: I1030 00:00:23.898307 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 30 00:00:24.560499 systemd[1]: Created slice kubepods-besteffort-pod0801b065_10cc_4a08_b866_9aae0d787d76.slice - libcontainer container kubepods-besteffort-pod0801b065_10cc_4a08_b866_9aae0d787d76.slice.
Oct 30 00:00:24.653235 kubelet[2786]: I1030 00:00:24.653188 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0801b065-10cc-4a08-b866-9aae0d787d76-xtables-lock\") pod \"kube-proxy-fcx5z\" (UID: \"0801b065-10cc-4a08-b866-9aae0d787d76\") " pod="kube-system/kube-proxy-fcx5z"
Oct 30 00:00:24.653235 kubelet[2786]: I1030 00:00:24.653229 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0801b065-10cc-4a08-b866-9aae0d787d76-lib-modules\") pod \"kube-proxy-fcx5z\" (UID: \"0801b065-10cc-4a08-b866-9aae0d787d76\") " pod="kube-system/kube-proxy-fcx5z"
Oct 30 00:00:24.653235 kubelet[2786]: I1030 00:00:24.653255 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0801b065-10cc-4a08-b866-9aae0d787d76-kube-proxy\") pod \"kube-proxy-fcx5z\" (UID: \"0801b065-10cc-4a08-b866-9aae0d787d76\") " pod="kube-system/kube-proxy-fcx5z"
Oct 30 00:00:24.653507 kubelet[2786]: I1030 00:00:24.653276 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z878\" (UniqueName: \"kubernetes.io/projected/0801b065-10cc-4a08-b866-9aae0d787d76-kube-api-access-8z878\") pod \"kube-proxy-fcx5z\" (UID: \"0801b065-10cc-4a08-b866-9aae0d787d76\") " pod="kube-system/kube-proxy-fcx5z"
Oct 30 00:00:24.872293 kubelet[2786]: E1030 00:00:24.872165 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:24.872942 containerd[1621]: time="2025-10-30T00:00:24.872901421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fcx5z,Uid:0801b065-10cc-4a08-b866-9aae0d787d76,Namespace:kube-system,Attempt:0,}"
Oct 30 00:00:24.897033 containerd[1621]: time="2025-10-30T00:00:24.896991973Z" level=info msg="connecting to shim a94436116eec0684e05cb6c9051e954c8d2a11c3ae440f5740c117b9fc708fb7" address="unix:///run/containerd/s/37f65b9c5be9b16a16357044a5b430c860ee04094f826be4adafb2b19730171b" namespace=k8s.io protocol=ttrpc version=3
Oct 30 00:00:24.924325 systemd[1]: Started cri-containerd-a94436116eec0684e05cb6c9051e954c8d2a11c3ae440f5740c117b9fc708fb7.scope - libcontainer container a94436116eec0684e05cb6c9051e954c8d2a11c3ae440f5740c117b9fc708fb7.
Oct 30 00:00:24.953954 containerd[1621]: time="2025-10-30T00:00:24.953910768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fcx5z,Uid:0801b065-10cc-4a08-b866-9aae0d787d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"a94436116eec0684e05cb6c9051e954c8d2a11c3ae440f5740c117b9fc708fb7\""
Oct 30 00:00:24.954791 kubelet[2786]: E1030 00:00:24.954767 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:24.957161 containerd[1621]: time="2025-10-30T00:00:24.956930235Z" level=info msg="CreateContainer within sandbox \"a94436116eec0684e05cb6c9051e954c8d2a11c3ae440f5740c117b9fc708fb7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 30 00:00:24.970154 containerd[1621]: time="2025-10-30T00:00:24.970119869Z" level=info msg="Container 65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:00:24.979001 containerd[1621]: time="2025-10-30T00:00:24.978957558Z" level=info msg="CreateContainer within sandbox \"a94436116eec0684e05cb6c9051e954c8d2a11c3ae440f5740c117b9fc708fb7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6\""
Oct 30 00:00:24.979968 containerd[1621]: time="2025-10-30T00:00:24.979940104Z" level=info msg="StartContainer for \"65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6\""
Oct 30 00:00:24.981774 containerd[1621]: time="2025-10-30T00:00:24.981743783Z" level=info msg="connecting to shim 65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6" address="unix:///run/containerd/s/37f65b9c5be9b16a16357044a5b430c860ee04094f826be4adafb2b19730171b" protocol=ttrpc version=3
Oct 30 00:00:25.011328 systemd[1]: Started cri-containerd-65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6.scope - libcontainer container 65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6.
Oct 30 00:00:25.022986 systemd[1]: Created slice kubepods-besteffort-pod7b2a2382_39d2_4641_bc93_1de8152ece40.slice - libcontainer container kubepods-besteffort-pod7b2a2382_39d2_4641_bc93_1de8152ece40.slice.
Oct 30 00:00:25.055867 kubelet[2786]: I1030 00:00:25.055826 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b2a2382-39d2-4641-bc93-1de8152ece40-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r89q7\" (UID: \"7b2a2382-39d2-4641-bc93-1de8152ece40\") " pod="tigera-operator/tigera-operator-7dcd859c48-r89q7"
Oct 30 00:00:25.056125 kubelet[2786]: I1030 00:00:25.056043 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgqm\" (UniqueName: \"kubernetes.io/projected/7b2a2382-39d2-4641-bc93-1de8152ece40-kube-api-access-qtgqm\") pod \"tigera-operator-7dcd859c48-r89q7\" (UID: \"7b2a2382-39d2-4641-bc93-1de8152ece40\") " pod="tigera-operator/tigera-operator-7dcd859c48-r89q7"
Oct 30 00:00:25.063353 containerd[1621]: time="2025-10-30T00:00:25.063253176Z" level=info msg="StartContainer for \"65ac61028b3ecebd9f48a1ba2b19a6f69e3311287c1c3fd5846a6e5247dbdfd6\" returns successfully"
Oct 30 00:00:25.327687 containerd[1621]: time="2025-10-30T00:00:25.327605716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r89q7,Uid:7b2a2382-39d2-4641-bc93-1de8152ece40,Namespace:tigera-operator,Attempt:0,}"
Oct 30 00:00:25.380396 containerd[1621]: time="2025-10-30T00:00:25.380316829Z" level=info msg="connecting to shim 681126eab29c63cfc39a844218f9b3ad2885a054b25fc6e8d8f7ec9a70259afc" address="unix:///run/containerd/s/178e96ef0e02272f781166fd7ce04e330df6260fe7a6767d297f7b58c87ad77c" namespace=k8s.io protocol=ttrpc version=3
Oct 30 00:00:25.429263 systemd[1]: Started cri-containerd-681126eab29c63cfc39a844218f9b3ad2885a054b25fc6e8d8f7ec9a70259afc.scope - libcontainer container 681126eab29c63cfc39a844218f9b3ad2885a054b25fc6e8d8f7ec9a70259afc.
Oct 30 00:00:25.480833 containerd[1621]: time="2025-10-30T00:00:25.480782549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r89q7,Uid:7b2a2382-39d2-4641-bc93-1de8152ece40,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"681126eab29c63cfc39a844218f9b3ad2885a054b25fc6e8d8f7ec9a70259afc\""
Oct 30 00:00:25.486712 containerd[1621]: time="2025-10-30T00:00:25.486662448Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Oct 30 00:00:25.647080 kubelet[2786]: E1030 00:00:25.646936 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:00:25.657959 kubelet[2786]: I1030 00:00:25.657867 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fcx5z" podStartSLOduration=1.657843604 podStartE2EDuration="1.657843604s" podCreationTimestamp="2025-10-30 00:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:00:25.657651176 +0000 UTC m=+8.129114719" watchObservedRunningTime="2025-10-30 00:00:25.657843604 +0000 UTC m=+8.129307147"
Oct 30 00:00:28.343412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385798472.mount: Deactivated successfully.
Oct 30 00:00:28.694312 containerd[1621]: time="2025-10-30T00:00:28.694159487Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:28.695042 containerd[1621]: time="2025-10-30T00:00:28.695019267Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:00:28.696362 containerd[1621]: time="2025-10-30T00:00:28.696283653Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:28.698326 containerd[1621]: time="2025-10-30T00:00:28.698286177Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:28.698944 containerd[1621]: time="2025-10-30T00:00:28.698889432Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.212175592s" Oct 30 00:00:28.698944 containerd[1621]: time="2025-10-30T00:00:28.698937941Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:00:28.701205 containerd[1621]: time="2025-10-30T00:00:28.701145277Z" level=info msg="CreateContainer within sandbox \"681126eab29c63cfc39a844218f9b3ad2885a054b25fc6e8d8f7ec9a70259afc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:00:28.715022 containerd[1621]: time="2025-10-30T00:00:28.714977205Z" level=info msg="Container f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:00:28.718450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744211524.mount: Deactivated successfully. Oct 30 00:00:28.722636 containerd[1621]: time="2025-10-30T00:00:28.722591817Z" level=info msg="CreateContainer within sandbox \"681126eab29c63cfc39a844218f9b3ad2885a054b25fc6e8d8f7ec9a70259afc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937\"" Oct 30 00:00:28.722994 containerd[1621]: time="2025-10-30T00:00:28.722930992Z" level=info msg="StartContainer for \"f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937\"" Oct 30 00:00:28.723763 containerd[1621]: time="2025-10-30T00:00:28.723734431Z" level=info msg="connecting to shim f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937" address="unix:///run/containerd/s/178e96ef0e02272f781166fd7ce04e330df6260fe7a6767d297f7b58c87ad77c" protocol=ttrpc version=3 Oct 30 00:00:28.759403 systemd[1]: Started cri-containerd-f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937.scope - libcontainer container f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937. 
Oct 30 00:00:28.791949 containerd[1621]: time="2025-10-30T00:00:28.791904151Z" level=info msg="StartContainer for \"f3f336ca0dd660efd998e519e15756dbe18a9084cc1c99ddbf9f7e8c64164937\" returns successfully" Oct 30 00:00:29.666706 kubelet[2786]: I1030 00:00:29.666629 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r89q7" podStartSLOduration=2.450204773 podStartE2EDuration="5.666614s" podCreationTimestamp="2025-10-30 00:00:24 +0000 UTC" firstStartedPulling="2025-10-30 00:00:25.483304973 +0000 UTC m=+7.954768516" lastFinishedPulling="2025-10-30 00:00:28.6997142 +0000 UTC m=+11.171177743" observedRunningTime="2025-10-30 00:00:29.66657019 +0000 UTC m=+12.138033733" watchObservedRunningTime="2025-10-30 00:00:29.666614 +0000 UTC m=+12.138077543" Oct 30 00:00:29.784909 kubelet[2786]: E1030 00:00:29.784858 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:30.454194 kubelet[2786]: E1030 00:00:30.454143 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:30.658731 kubelet[2786]: E1030 00:00:30.658525 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:31.429706 kubelet[2786]: E1030 00:00:31.429652 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:34.399837 sudo[1839]: pam_unix(sudo:session): session closed for user root Oct 30 00:00:34.401781 sshd[1838]: Connection closed by 10.0.0.1 port 35864 Oct 30 00:00:34.402703 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Oct 30 00:00:34.408485 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:35864.service: Deactivated successfully. Oct 30 00:00:34.411933 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 00:00:34.412254 systemd[1]: session-9.scope: Consumed 5.074s CPU time, 218.7M memory peak. Oct 30 00:00:34.415700 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:00:34.416698 systemd-logind[1592]: Removed session 9. Oct 30 00:00:38.869131 systemd[1]: Created slice kubepods-besteffort-pod8c4cd9d2_8f28_492f_9d66_ce7aaae7161a.slice - libcontainer container kubepods-besteffort-pod8c4cd9d2_8f28_492f_9d66_ce7aaae7161a.slice. 
Oct 30 00:00:38.947802 kubelet[2786]: I1030 00:00:38.947727 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c4cd9d2-8f28-492f-9d66-ce7aaae7161a-typha-certs\") pod \"calico-typha-789c84df59-tl6dz\" (UID: \"8c4cd9d2-8f28-492f-9d66-ce7aaae7161a\") " pod="calico-system/calico-typha-789c84df59-tl6dz" Oct 30 00:00:38.947802 kubelet[2786]: I1030 00:00:38.947799 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdrv\" (UniqueName: \"kubernetes.io/projected/8c4cd9d2-8f28-492f-9d66-ce7aaae7161a-kube-api-access-sqdrv\") pod \"calico-typha-789c84df59-tl6dz\" (UID: \"8c4cd9d2-8f28-492f-9d66-ce7aaae7161a\") " pod="calico-system/calico-typha-789c84df59-tl6dz" Oct 30 00:00:38.947802 kubelet[2786]: I1030 00:00:38.947823 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4cd9d2-8f28-492f-9d66-ce7aaae7161a-tigera-ca-bundle\") pod \"calico-typha-789c84df59-tl6dz\" (UID: \"8c4cd9d2-8f28-492f-9d66-ce7aaae7161a\") " pod="calico-system/calico-typha-789c84df59-tl6dz" Oct 30 00:00:39.052345 systemd[1]: Created slice kubepods-besteffort-pod4e96c40c_cb26_49f0_ab88_8061cb3d38a4.slice - libcontainer container kubepods-besteffort-pod4e96c40c_cb26_49f0_ab88_8061cb3d38a4.slice. Oct 30 00:00:39.148986 kubelet[2786]: I1030 00:00:39.148817 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-cni-net-dir\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.148986 kubelet[2786]: I1030 00:00:39.148867 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-var-lib-calico\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.148986 kubelet[2786]: I1030 00:00:39.148883 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-xtables-lock\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.148986 kubelet[2786]: I1030 00:00:39.148907 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-tigera-ca-bundle\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.148986 kubelet[2786]: I1030 00:00:39.148923 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tfk\" (UniqueName: \"kubernetes.io/projected/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-kube-api-access-b5tfk\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149292 kubelet[2786]: I1030 00:00:39.148945 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-cni-log-dir\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149292 kubelet[2786]: I1030 00:00:39.148962 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-policysync\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149292 kubelet[2786]: I1030 00:00:39.148994 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-flexvol-driver-host\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149292 kubelet[2786]: I1030 00:00:39.149056 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-lib-modules\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149292 kubelet[2786]: I1030 00:00:39.149083 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-node-certs\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149448 kubelet[2786]: I1030 00:00:39.149139 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-cni-bin-dir\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.149448 kubelet[2786]: I1030 00:00:39.149158 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e96c40c-cb26-49f0-ab88-8061cb3d38a4-var-run-calico\") pod \"calico-node-877q2\" (UID: \"4e96c40c-cb26-49f0-ab88-8061cb3d38a4\") " pod="calico-system/calico-node-877q2" Oct 30 00:00:39.172313 kubelet[2786]: E1030 00:00:39.172255 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:39.173067 containerd[1621]: time="2025-10-30T00:00:39.172986028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789c84df59-tl6dz,Uid:8c4cd9d2-8f28-492f-9d66-ce7aaae7161a,Namespace:calico-system,Attempt:0,}" Oct 30 00:00:39.200848 containerd[1621]: time="2025-10-30T00:00:39.200692372Z" level=info msg="connecting to shim c077f9f0f0769c8b9a3d95e6cec93cf31b674c03ef231e07637c54687dd6cd10" address="unix:///run/containerd/s/a62124f30a9de6ac87444ccb32389d92cc26bb169285f030fb67766815b1d7aa" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:00:39.231381 systemd[1]: Started cri-containerd-c077f9f0f0769c8b9a3d95e6cec93cf31b674c03ef231e07637c54687dd6cd10.scope - libcontainer container c077f9f0f0769c8b9a3d95e6cec93cf31b674c03ef231e07637c54687dd6cd10. 
Oct 30 00:00:39.254857 kubelet[2786]: E1030 00:00:39.254796 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:39.263538 kubelet[2786]: E1030 00:00:39.263427 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.263538 kubelet[2786]: W1030 00:00:39.263453 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.263538 kubelet[2786]: E1030 00:00:39.263496 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.267206 kubelet[2786]: E1030 00:00:39.267186 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.267366 kubelet[2786]: W1030 00:00:39.267312 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.267366 kubelet[2786]: E1030 00:00:39.267333 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.326869 containerd[1621]: time="2025-10-30T00:00:39.326798344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789c84df59-tl6dz,Uid:8c4cd9d2-8f28-492f-9d66-ce7aaae7161a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c077f9f0f0769c8b9a3d95e6cec93cf31b674c03ef231e07637c54687dd6cd10\"" Oct 30 00:00:39.328109 kubelet[2786]: E1030 00:00:39.328071 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:39.329557 containerd[1621]: time="2025-10-30T00:00:39.329509564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:00:39.332500 kubelet[2786]: E1030 00:00:39.332474 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.332500 kubelet[2786]: W1030 00:00:39.332498 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.332616 kubelet[2786]: E1030 00:00:39.332520 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.332767 kubelet[2786]: E1030 00:00:39.332751 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.332767 kubelet[2786]: W1030 00:00:39.332764 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.332837 kubelet[2786]: E1030 00:00:39.332775 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.332963 kubelet[2786]: E1030 00:00:39.332948 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.332963 kubelet[2786]: W1030 00:00:39.332960 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.333043 kubelet[2786]: E1030 00:00:39.332970 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.333296 kubelet[2786]: E1030 00:00:39.333276 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.333296 kubelet[2786]: W1030 00:00:39.333292 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.333296 kubelet[2786]: E1030 00:00:39.333305 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.333558 kubelet[2786]: E1030 00:00:39.333529 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.333558 kubelet[2786]: W1030 00:00:39.333557 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.333619 kubelet[2786]: E1030 00:00:39.333570 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.333828 kubelet[2786]: E1030 00:00:39.333808 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.333828 kubelet[2786]: W1030 00:00:39.333824 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.333902 kubelet[2786]: E1030 00:00:39.333835 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.334114 kubelet[2786]: E1030 00:00:39.334084 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.334156 kubelet[2786]: W1030 00:00:39.334127 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.334156 kubelet[2786]: E1030 00:00:39.334142 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.334496 kubelet[2786]: E1030 00:00:39.334385 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.334496 kubelet[2786]: W1030 00:00:39.334399 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.334496 kubelet[2786]: E1030 00:00:39.334411 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.335157 kubelet[2786]: E1030 00:00:39.334692 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.335157 kubelet[2786]: W1030 00:00:39.334704 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.335157 kubelet[2786]: E1030 00:00:39.334715 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.335157 kubelet[2786]: E1030 00:00:39.334895 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.335157 kubelet[2786]: W1030 00:00:39.334904 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.335157 kubelet[2786]: E1030 00:00:39.334923 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.335157 kubelet[2786]: E1030 00:00:39.335089 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.335157 kubelet[2786]: W1030 00:00:39.335122 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.335157 kubelet[2786]: E1030 00:00:39.335131 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.335302 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.336254 kubelet[2786]: W1030 00:00:39.335324 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.335336 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.335529 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.336254 kubelet[2786]: W1030 00:00:39.335547 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.335557 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.335769 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.336254 kubelet[2786]: W1030 00:00:39.335778 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.335788 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.336254 kubelet[2786]: E1030 00:00:39.336041 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.337235 kubelet[2786]: W1030 00:00:39.336051 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.337235 kubelet[2786]: E1030 00:00:39.336062 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.337235 kubelet[2786]: E1030 00:00:39.336293 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.337235 kubelet[2786]: W1030 00:00:39.336303 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.337235 kubelet[2786]: E1030 00:00:39.336313 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.337235 kubelet[2786]: E1030 00:00:39.336557 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.337235 kubelet[2786]: W1030 00:00:39.336566 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.337235 kubelet[2786]: E1030 00:00:39.336585 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.337235 kubelet[2786]: E1030 00:00:39.336773 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.337235 kubelet[2786]: W1030 00:00:39.336782 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.337442 kubelet[2786]: E1030 00:00:39.336791 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.337442 kubelet[2786]: E1030 00:00:39.336986 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.337442 kubelet[2786]: W1030 00:00:39.336995 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.337442 kubelet[2786]: E1030 00:00:39.337005 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.337442 kubelet[2786]: E1030 00:00:39.337262 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.337442 kubelet[2786]: W1030 00:00:39.337273 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.337442 kubelet[2786]: E1030 00:00:39.337282 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.352000 kubelet[2786]: E1030 00:00:39.351346 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.352000 kubelet[2786]: W1030 00:00:39.351381 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.352000 kubelet[2786]: E1030 00:00:39.351431 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.352000 kubelet[2786]: I1030 00:00:39.351490 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68445617-ec60-49e4-ab10-bde455e7ecc9-registration-dir\") pod \"csi-node-driver-kjkt6\" (UID: \"68445617-ec60-49e4-ab10-bde455e7ecc9\") " pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:39.352000 kubelet[2786]: E1030 00:00:39.351792 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.352000 kubelet[2786]: W1030 00:00:39.351804 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.352000 kubelet[2786]: E1030 00:00:39.351821 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.352000 kubelet[2786]: I1030 00:00:39.351863 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68445617-ec60-49e4-ab10-bde455e7ecc9-kubelet-dir\") pod \"csi-node-driver-kjkt6\" (UID: \"68445617-ec60-49e4-ab10-bde455e7ecc9\") " pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:39.352341 kubelet[2786]: E1030 00:00:39.352080 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.352341 kubelet[2786]: W1030 00:00:39.352105 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.352341 kubelet[2786]: E1030 00:00:39.352139 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.352341 kubelet[2786]: I1030 00:00:39.352159 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78f9d\" (UniqueName: \"kubernetes.io/projected/68445617-ec60-49e4-ab10-bde455e7ecc9-kube-api-access-78f9d\") pod \"csi-node-driver-kjkt6\" (UID: \"68445617-ec60-49e4-ab10-bde455e7ecc9\") " pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:39.352474 kubelet[2786]: E1030 00:00:39.352417 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.352474 kubelet[2786]: W1030 00:00:39.352439 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.352474 kubelet[2786]: E1030 00:00:39.352473 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.352587 kubelet[2786]: I1030 00:00:39.352493 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68445617-ec60-49e4-ab10-bde455e7ecc9-socket-dir\") pod \"csi-node-driver-kjkt6\" (UID: \"68445617-ec60-49e4-ab10-bde455e7ecc9\") " pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:39.354117 kubelet[2786]: E1030 00:00:39.352745 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354117 kubelet[2786]: W1030 00:00:39.352766 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354117 kubelet[2786]: E1030 00:00:39.352782 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354117 kubelet[2786]: I1030 00:00:39.352798 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68445617-ec60-49e4-ab10-bde455e7ecc9-varrun\") pod \"csi-node-driver-kjkt6\" (UID: \"68445617-ec60-49e4-ab10-bde455e7ecc9\") " pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:39.354117 kubelet[2786]: E1030 00:00:39.353106 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354117 kubelet[2786]: W1030 00:00:39.353120 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354117 kubelet[2786]: E1030 00:00:39.353136 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354117 kubelet[2786]: E1030 00:00:39.353352 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354117 kubelet[2786]: W1030 00:00:39.353361 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.353402 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.353598 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354361 kubelet[2786]: W1030 00:00:39.353607 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.353631 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.353835 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354361 kubelet[2786]: W1030 00:00:39.353843 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.353863 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.354033 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354361 kubelet[2786]: W1030 00:00:39.354042 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354361 kubelet[2786]: E1030 00:00:39.354061 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354567 kubelet[2786]: E1030 00:00:39.354351 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354567 kubelet[2786]: W1030 00:00:39.354363 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354567 kubelet[2786]: E1030 00:00:39.354391 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354633 kubelet[2786]: E1030 00:00:39.354620 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354633 kubelet[2786]: W1030 00:00:39.354631 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354677 kubelet[2786]: E1030 00:00:39.354643 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.354899 kubelet[2786]: E1030 00:00:39.354874 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.354899 kubelet[2786]: W1030 00:00:39.354892 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.354961 kubelet[2786]: E1030 00:00:39.354904 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.355162 kubelet[2786]: E1030 00:00:39.355137 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.355162 kubelet[2786]: W1030 00:00:39.355156 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.355232 kubelet[2786]: E1030 00:00:39.355168 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.355534 kubelet[2786]: E1030 00:00:39.355493 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.355593 kubelet[2786]: W1030 00:00:39.355557 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.355593 kubelet[2786]: E1030 00:00:39.355571 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.359594 kubelet[2786]: E1030 00:00:39.359547 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:39.361615 containerd[1621]: time="2025-10-30T00:00:39.361566361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-877q2,Uid:4e96c40c-cb26-49f0-ab88-8061cb3d38a4,Namespace:calico-system,Attempt:0,}" Oct 30 00:00:39.392382 containerd[1621]: time="2025-10-30T00:00:39.392309742Z" level=info msg="connecting to shim 22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709" address="unix:///run/containerd/s/4e5072a8f942bfa400cb66676444ace2c6df32edea36f6f1e45aebdf12e1b989" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:00:39.420395 systemd[1]: Started cri-containerd-22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709.scope - libcontainer container 22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709. 
Oct 30 00:00:39.448866 containerd[1621]: time="2025-10-30T00:00:39.448802630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-877q2,Uid:4e96c40c-cb26-49f0-ab88-8061cb3d38a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\"" Oct 30 00:00:39.449699 kubelet[2786]: E1030 00:00:39.449672 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:39.453831 kubelet[2786]: E1030 00:00:39.453807 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.453831 kubelet[2786]: W1030 00:00:39.453828 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.453898 kubelet[2786]: E1030 00:00:39.453853 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.454175 kubelet[2786]: E1030 00:00:39.454157 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.454175 kubelet[2786]: W1030 00:00:39.454171 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.454246 kubelet[2786]: E1030 00:00:39.454183 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.454493 kubelet[2786]: E1030 00:00:39.454462 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.454493 kubelet[2786]: W1030 00:00:39.454479 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.454493 kubelet[2786]: E1030 00:00:39.454499 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.454747 kubelet[2786]: E1030 00:00:39.454732 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.454747 kubelet[2786]: W1030 00:00:39.454745 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.454822 kubelet[2786]: E1030 00:00:39.454765 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.455017 kubelet[2786]: E1030 00:00:39.454999 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.455041 kubelet[2786]: W1030 00:00:39.455016 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.455063 kubelet[2786]: E1030 00:00:39.455054 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.455436 kubelet[2786]: E1030 00:00:39.455416 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.455436 kubelet[2786]: W1030 00:00:39.455427 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.455576 kubelet[2786]: E1030 00:00:39.455561 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.455765 kubelet[2786]: E1030 00:00:39.455753 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.455765 kubelet[2786]: W1030 00:00:39.455763 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.455807 kubelet[2786]: E1030 00:00:39.455778 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.456085 kubelet[2786]: E1030 00:00:39.456071 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.456085 kubelet[2786]: W1030 00:00:39.456081 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.456225 kubelet[2786]: E1030 00:00:39.456134 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.456331 kubelet[2786]: E1030 00:00:39.456317 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.456331 kubelet[2786]: W1030 00:00:39.456328 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.456386 kubelet[2786]: E1030 00:00:39.456364 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.456548 kubelet[2786]: E1030 00:00:39.456517 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.456548 kubelet[2786]: W1030 00:00:39.456527 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.456609 kubelet[2786]: E1030 00:00:39.456550 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.456779 kubelet[2786]: E1030 00:00:39.456767 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.456779 kubelet[2786]: W1030 00:00:39.456777 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.456839 kubelet[2786]: E1030 00:00:39.456815 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.457062 kubelet[2786]: E1030 00:00:39.457047 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.457062 kubelet[2786]: W1030 00:00:39.457058 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.457133 kubelet[2786]: E1030 00:00:39.457072 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.457406 kubelet[2786]: E1030 00:00:39.457387 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.457406 kubelet[2786]: W1030 00:00:39.457401 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.457475 kubelet[2786]: E1030 00:00:39.457420 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.457621 kubelet[2786]: E1030 00:00:39.457603 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.457621 kubelet[2786]: W1030 00:00:39.457613 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.457690 kubelet[2786]: E1030 00:00:39.457627 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.457802 kubelet[2786]: E1030 00:00:39.457789 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.457802 kubelet[2786]: W1030 00:00:39.457798 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.457847 kubelet[2786]: E1030 00:00:39.457822 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.457994 kubelet[2786]: E1030 00:00:39.457980 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.457994 kubelet[2786]: W1030 00:00:39.457991 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.458045 kubelet[2786]: E1030 00:00:39.458019 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.458216 kubelet[2786]: E1030 00:00:39.458201 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.458216 kubelet[2786]: W1030 00:00:39.458213 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.458266 kubelet[2786]: E1030 00:00:39.458226 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.458427 kubelet[2786]: E1030 00:00:39.458415 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.458427 kubelet[2786]: W1030 00:00:39.458424 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.458487 kubelet[2786]: E1030 00:00:39.458436 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.458613 kubelet[2786]: E1030 00:00:39.458600 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.458613 kubelet[2786]: W1030 00:00:39.458610 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.458662 kubelet[2786]: E1030 00:00:39.458641 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.458887 kubelet[2786]: E1030 00:00:39.458874 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.458887 kubelet[2786]: W1030 00:00:39.458884 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.458940 kubelet[2786]: E1030 00:00:39.458898 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.459132 kubelet[2786]: E1030 00:00:39.459119 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.459132 kubelet[2786]: W1030 00:00:39.459128 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.459196 kubelet[2786]: E1030 00:00:39.459162 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.459319 kubelet[2786]: E1030 00:00:39.459306 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.459319 kubelet[2786]: W1030 00:00:39.459316 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.459366 kubelet[2786]: E1030 00:00:39.459343 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.459485 kubelet[2786]: E1030 00:00:39.459472 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.459485 kubelet[2786]: W1030 00:00:39.459481 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.459546 kubelet[2786]: E1030 00:00:39.459493 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.459723 kubelet[2786]: E1030 00:00:39.459708 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.459723 kubelet[2786]: W1030 00:00:39.459720 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.459771 kubelet[2786]: E1030 00:00:39.459735 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:39.459928 kubelet[2786]: E1030 00:00:39.459915 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.459928 kubelet[2786]: W1030 00:00:39.459926 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.459970 kubelet[2786]: E1030 00:00:39.459935 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:39.468242 kubelet[2786]: E1030 00:00:39.468220 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:39.468242 kubelet[2786]: W1030 00:00:39.468236 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:39.468342 kubelet[2786]: E1030 00:00:39.468250 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:40.617432 kubelet[2786]: E1030 00:00:40.617367 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:42.617352 kubelet[2786]: E1030 00:00:42.617279 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:44.617569 kubelet[2786]: E1030 00:00:44.617472 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:45.480052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854977097.mount: Deactivated successfully. 
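
The dns.go:153 warnings in this stretch record the kubelet trimming the node's DNS configuration: resolv.conf apparently lists more than the three nameservers the kubelet (like glibc) will use, so the extras are omitted and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied to pods. A small sketch of that trimming rule, assuming a resolv.conf-style input (an illustration of the behaviour the message describes, not kubelet code; the file name is made up):

// resolv_limit.go - illustrative sketch of the "Nameserver limits exceeded"
// check: keep at most three nameserver entries, as the kubelet log above does.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc and the kubelet both cap resolv.conf at three entries

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: omitting %d entries\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}

Trimming the node's /etc/resolv.conf to three nameservers would silence the warning without changing the DNS configuration pods actually receive.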
Oct 30 00:00:46.617645 kubelet[2786]: E1030 00:00:46.617564 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:46.996009 containerd[1621]: time="2025-10-30T00:00:46.995939803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:46.997321 containerd[1621]: time="2025-10-30T00:00:46.997278257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 00:00:46.998800 containerd[1621]: time="2025-10-30T00:00:46.998728428Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:47.001001 containerd[1621]: time="2025-10-30T00:00:47.000922117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:47.001574 containerd[1621]: time="2025-10-30T00:00:47.001517283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 7.671952588s" Oct 30 00:00:47.001619 containerd[1621]: time="2025-10-30T00:00:47.001573276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 00:00:47.002958 containerd[1621]: time="2025-10-30T00:00:47.002928734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:00:47.011926 containerd[1621]: time="2025-10-30T00:00:47.011868785Z" level=info msg="CreateContainer within sandbox \"c077f9f0f0769c8b9a3d95e6cec93cf31b674c03ef231e07637c54687dd6cd10\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 00:00:47.022949 containerd[1621]: time="2025-10-30T00:00:47.022889611Z" level=info msg="Container ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:00:47.031350 containerd[1621]: time="2025-10-30T00:00:47.031276233Z" level=info msg="CreateContainer within sandbox \"c077f9f0f0769c8b9a3d95e6cec93cf31b674c03ef231e07637c54687dd6cd10\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa\"" Oct 30 00:00:47.031843 containerd[1621]: time="2025-10-30T00:00:47.031804767Z" level=info msg="StartContainer for \"ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa\"" Oct 30 00:00:47.033227 containerd[1621]: time="2025-10-30T00:00:47.033197212Z" level=info msg="connecting to shim ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa" address="unix:///run/containerd/s/a62124f30a9de6ac87444ccb32389d92cc26bb169285f030fb67766815b1d7aa" protocol=ttrpc version=3 Oct 30 00:00:47.062315 systemd[1]: Started 
cri-containerd-ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa.scope - libcontainer container ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa. Oct 30 00:00:47.142681 containerd[1621]: time="2025-10-30T00:00:47.142633302Z" level=info msg="StartContainer for \"ba39359cb8f81ae5c22e58ec0d435d10cc616352648536e689e6c7bbc9d271fa\" returns successfully" Oct 30 00:00:47.693597 kubelet[2786]: E1030 00:00:47.693530 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.693597 kubelet[2786]: W1030 00:00:47.693560 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.693597 kubelet[2786]: E1030 00:00:47.693583 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.694414 kubelet[2786]: E1030 00:00:47.693884 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.694414 kubelet[2786]: W1030 00:00:47.693903 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.694414 kubelet[2786]: E1030 00:00:47.693926 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.694414 kubelet[2786]: E1030 00:00:47.694246 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.694414 kubelet[2786]: W1030 00:00:47.694257 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.694414 kubelet[2786]: E1030 00:00:47.694268 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.694696 kubelet[2786]: E1030 00:00:47.694493 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.694696 kubelet[2786]: W1030 00:00:47.694502 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.694696 kubelet[2786]: E1030 00:00:47.694511 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.694844 kubelet[2786]: E1030 00:00:47.694732 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.694844 kubelet[2786]: W1030 00:00:47.694795 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.694844 kubelet[2786]: E1030 00:00:47.694807 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.695058 kubelet[2786]: E1030 00:00:47.695035 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.695058 kubelet[2786]: W1030 00:00:47.695050 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.695204 kubelet[2786]: E1030 00:00:47.695066 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.695370 kubelet[2786]: E1030 00:00:47.695327 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.695370 kubelet[2786]: W1030 00:00:47.695341 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.695370 kubelet[2786]: E1030 00:00:47.695352 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.695593 kubelet[2786]: E1030 00:00:47.695532 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.695593 kubelet[2786]: W1030 00:00:47.695542 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.695593 kubelet[2786]: E1030 00:00:47.695552 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.695936 kubelet[2786]: E1030 00:00:47.695704 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:47.696052 kubelet[2786]: E1030 00:00:47.695946 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.696052 kubelet[2786]: W1030 00:00:47.695961 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.696052 kubelet[2786]: E1030 00:00:47.696001 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.696363 kubelet[2786]: E1030 00:00:47.696293 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.696363 kubelet[2786]: W1030 00:00:47.696303 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.696363 kubelet[2786]: E1030 00:00:47.696313 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.696597 kubelet[2786]: E1030 00:00:47.696500 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.696597 kubelet[2786]: W1030 00:00:47.696508 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.696597 kubelet[2786]: E1030 00:00:47.696517 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.696840 kubelet[2786]: E1030 00:00:47.696674 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.696840 kubelet[2786]: W1030 00:00:47.696682 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.696840 kubelet[2786]: E1030 00:00:47.696690 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.696840 kubelet[2786]: E1030 00:00:47.696844 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.697074 kubelet[2786]: W1030 00:00:47.696852 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.697074 kubelet[2786]: E1030 00:00:47.696860 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.697074 kubelet[2786]: E1030 00:00:47.697053 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.697074 kubelet[2786]: W1030 00:00:47.697060 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.697074 kubelet[2786]: E1030 00:00:47.697068 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.697345 kubelet[2786]: E1030 00:00:47.697248 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.697345 kubelet[2786]: W1030 00:00:47.697256 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.697345 kubelet[2786]: E1030 00:00:47.697264 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.706033 kubelet[2786]: I1030 00:00:47.705904 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-789c84df59-tl6dz" podStartSLOduration=2.032452109 podStartE2EDuration="9.705882825s" podCreationTimestamp="2025-10-30 00:00:38 +0000 UTC" firstStartedPulling="2025-10-30 00:00:39.32902693 +0000 UTC m=+21.800490473" lastFinishedPulling="2025-10-30 00:00:47.002457646 +0000 UTC m=+29.473921189" observedRunningTime="2025-10-30 00:00:47.705882785 +0000 UTC m=+30.177346338" watchObservedRunningTime="2025-10-30 00:00:47.705882825 +0000 UTC m=+30.177346368" Oct 30 00:00:47.713341 kubelet[2786]: E1030 00:00:47.713312 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.713341 kubelet[2786]: W1030 00:00:47.713331 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.713341 kubelet[2786]: E1030 00:00:47.713350 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.713706 kubelet[2786]: E1030 00:00:47.713654 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.713706 kubelet[2786]: W1030 00:00:47.713691 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.713781 kubelet[2786]: E1030 00:00:47.713729 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.714182 kubelet[2786]: E1030 00:00:47.714147 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.714182 kubelet[2786]: W1030 00:00:47.714176 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.714275 kubelet[2786]: E1030 00:00:47.714205 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.714459 kubelet[2786]: E1030 00:00:47.714424 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.714459 kubelet[2786]: W1030 00:00:47.714440 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.714459 kubelet[2786]: E1030 00:00:47.714463 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.714720 kubelet[2786]: E1030 00:00:47.714680 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.714720 kubelet[2786]: W1030 00:00:47.714690 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.714720 kubelet[2786]: E1030 00:00:47.714703 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.714916 kubelet[2786]: E1030 00:00:47.714899 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.714916 kubelet[2786]: W1030 00:00:47.714910 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.715005 kubelet[2786]: E1030 00:00:47.714924 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.715190 kubelet[2786]: E1030 00:00:47.715169 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.715190 kubelet[2786]: W1030 00:00:47.715184 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.715190 kubelet[2786]: E1030 00:00:47.715196 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.715375 kubelet[2786]: E1030 00:00:47.715354 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.715375 kubelet[2786]: W1030 00:00:47.715376 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.715447 kubelet[2786]: E1030 00:00:47.715391 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.715595 kubelet[2786]: E1030 00:00:47.715574 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.715595 kubelet[2786]: W1030 00:00:47.715589 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.715673 kubelet[2786]: E1030 00:00:47.715606 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.715849 kubelet[2786]: E1030 00:00:47.715826 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.715849 kubelet[2786]: W1030 00:00:47.715843 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.715921 kubelet[2786]: E1030 00:00:47.715862 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.716133 kubelet[2786]: E1030 00:00:47.716092 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.716133 kubelet[2786]: W1030 00:00:47.716125 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.716226 kubelet[2786]: E1030 00:00:47.716140 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.716326 kubelet[2786]: E1030 00:00:47.716307 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.716326 kubelet[2786]: W1030 00:00:47.716320 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.716406 kubelet[2786]: E1030 00:00:47.716332 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.716505 kubelet[2786]: E1030 00:00:47.716488 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.716505 kubelet[2786]: W1030 00:00:47.716499 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.716582 kubelet[2786]: E1030 00:00:47.716512 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.716712 kubelet[2786]: E1030 00:00:47.716694 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.716712 kubelet[2786]: W1030 00:00:47.716705 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.716785 kubelet[2786]: E1030 00:00:47.716717 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.716942 kubelet[2786]: E1030 00:00:47.716921 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.716942 kubelet[2786]: W1030 00:00:47.716937 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.717171 kubelet[2786]: E1030 00:00:47.716954 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.717231 kubelet[2786]: E1030 00:00:47.717211 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.717231 kubelet[2786]: W1030 00:00:47.717224 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.717307 kubelet[2786]: E1030 00:00:47.717244 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:47.717783 kubelet[2786]: E1030 00:00:47.717752 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.717783 kubelet[2786]: W1030 00:00:47.717768 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.717873 kubelet[2786]: E1030 00:00:47.717786 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:47.718008 kubelet[2786]: E1030 00:00:47.717980 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:47.718008 kubelet[2786]: W1030 00:00:47.718001 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:47.718080 kubelet[2786]: E1030 00:00:47.718011 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.617615 kubelet[2786]: E1030 00:00:48.617508 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:48.694651 kubelet[2786]: I1030 00:00:48.694601 2786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:00:48.695186 kubelet[2786]: E1030 00:00:48.695060 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:48.704317 kubelet[2786]: E1030 00:00:48.704268 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.704317 kubelet[2786]: W1030 00:00:48.704294 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.704317 kubelet[2786]: E1030 00:00:48.704317 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.704578 kubelet[2786]: E1030 00:00:48.704559 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.704578 kubelet[2786]: W1030 00:00:48.704573 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.704656 kubelet[2786]: E1030 00:00:48.704584 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.704811 kubelet[2786]: E1030 00:00:48.704793 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.704811 kubelet[2786]: W1030 00:00:48.704805 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.704878 kubelet[2786]: E1030 00:00:48.704815 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.705122 kubelet[2786]: E1030 00:00:48.705084 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.705181 kubelet[2786]: W1030 00:00:48.705125 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.705181 kubelet[2786]: E1030 00:00:48.705137 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.705358 kubelet[2786]: E1030 00:00:48.705342 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.705358 kubelet[2786]: W1030 00:00:48.705354 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.705431 kubelet[2786]: E1030 00:00:48.705365 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.705557 kubelet[2786]: E1030 00:00:48.705543 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.705557 kubelet[2786]: W1030 00:00:48.705554 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.705628 kubelet[2786]: E1030 00:00:48.705564 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.705769 kubelet[2786]: E1030 00:00:48.705743 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.705769 kubelet[2786]: W1030 00:00:48.705758 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.705841 kubelet[2786]: E1030 00:00:48.705770 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.705998 kubelet[2786]: E1030 00:00:48.705982 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.705998 kubelet[2786]: W1030 00:00:48.705995 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.706066 kubelet[2786]: E1030 00:00:48.706005 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.706281 kubelet[2786]: E1030 00:00:48.706265 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.706281 kubelet[2786]: W1030 00:00:48.706278 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.706352 kubelet[2786]: E1030 00:00:48.706289 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.706490 kubelet[2786]: E1030 00:00:48.706475 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.706490 kubelet[2786]: W1030 00:00:48.706487 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.706563 kubelet[2786]: E1030 00:00:48.706498 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.706699 kubelet[2786]: E1030 00:00:48.706683 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.706699 kubelet[2786]: W1030 00:00:48.706697 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.706763 kubelet[2786]: E1030 00:00:48.706707 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.706902 kubelet[2786]: E1030 00:00:48.706887 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.706902 kubelet[2786]: W1030 00:00:48.706899 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.706993 kubelet[2786]: E1030 00:00:48.706909 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.707159 kubelet[2786]: E1030 00:00:48.707143 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.707159 kubelet[2786]: W1030 00:00:48.707156 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.707232 kubelet[2786]: E1030 00:00:48.707167 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.707376 kubelet[2786]: E1030 00:00:48.707360 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.707376 kubelet[2786]: W1030 00:00:48.707373 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.707454 kubelet[2786]: E1030 00:00:48.707383 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.707585 kubelet[2786]: E1030 00:00:48.707570 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.707585 kubelet[2786]: W1030 00:00:48.707582 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.707662 kubelet[2786]: E1030 00:00:48.707592 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.723022 kubelet[2786]: E1030 00:00:48.722968 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.723022 kubelet[2786]: W1030 00:00:48.722991 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.723022 kubelet[2786]: E1030 00:00:48.723010 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.723332 kubelet[2786]: E1030 00:00:48.723272 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.723332 kubelet[2786]: W1030 00:00:48.723297 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.723332 kubelet[2786]: E1030 00:00:48.723329 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.723689 kubelet[2786]: E1030 00:00:48.723662 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.723689 kubelet[2786]: W1030 00:00:48.723675 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.723689 kubelet[2786]: E1030 00:00:48.723692 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.724050 kubelet[2786]: E1030 00:00:48.724021 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.724112 kubelet[2786]: W1030 00:00:48.724049 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.724112 kubelet[2786]: E1030 00:00:48.724076 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.724299 kubelet[2786]: E1030 00:00:48.724274 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.724299 kubelet[2786]: W1030 00:00:48.724287 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.724377 kubelet[2786]: E1030 00:00:48.724302 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.724504 kubelet[2786]: E1030 00:00:48.724487 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.724504 kubelet[2786]: W1030 00:00:48.724498 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.724571 kubelet[2786]: E1030 00:00:48.724511 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.724706 kubelet[2786]: E1030 00:00:48.724690 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.724706 kubelet[2786]: W1030 00:00:48.724700 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.724777 kubelet[2786]: E1030 00:00:48.724717 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.724950 kubelet[2786]: E1030 00:00:48.724934 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.724950 kubelet[2786]: W1030 00:00:48.724944 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.725026 kubelet[2786]: E1030 00:00:48.724976 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.725151 kubelet[2786]: E1030 00:00:48.725135 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.725151 kubelet[2786]: W1030 00:00:48.725146 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.725247 kubelet[2786]: E1030 00:00:48.725194 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.725331 kubelet[2786]: E1030 00:00:48.725315 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.725331 kubelet[2786]: W1030 00:00:48.725325 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.725405 kubelet[2786]: E1030 00:00:48.725342 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.725528 kubelet[2786]: E1030 00:00:48.725512 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.725528 kubelet[2786]: W1030 00:00:48.725522 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.725608 kubelet[2786]: E1030 00:00:48.725535 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.725723 kubelet[2786]: E1030 00:00:48.725704 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.725723 kubelet[2786]: W1030 00:00:48.725717 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.725794 kubelet[2786]: E1030 00:00:48.725732 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.725912 kubelet[2786]: E1030 00:00:48.725896 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.725912 kubelet[2786]: W1030 00:00:48.725906 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.725997 kubelet[2786]: E1030 00:00:48.725919 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.726191 kubelet[2786]: E1030 00:00:48.726175 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.726191 kubelet[2786]: W1030 00:00:48.726186 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.726261 kubelet[2786]: E1030 00:00:48.726198 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.726411 kubelet[2786]: E1030 00:00:48.726394 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.726411 kubelet[2786]: W1030 00:00:48.726404 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.726488 kubelet[2786]: E1030 00:00:48.726417 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.726698 kubelet[2786]: E1030 00:00:48.726679 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.726698 kubelet[2786]: W1030 00:00:48.726692 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.726758 kubelet[2786]: E1030 00:00:48.726701 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:48.726884 kubelet[2786]: E1030 00:00:48.726867 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.726884 kubelet[2786]: W1030 00:00:48.726878 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.726963 kubelet[2786]: E1030 00:00:48.726888 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:00:48.727091 kubelet[2786]: E1030 00:00:48.727074 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:00:48.727091 kubelet[2786]: W1030 00:00:48.727086 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:00:48.727091 kubelet[2786]: E1030 00:00:48.727116 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:00:50.103289 containerd[1621]: time="2025-10-30T00:00:50.103214656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:50.104376 containerd[1621]: time="2025-10-30T00:00:50.104334142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 00:00:50.105961 containerd[1621]: time="2025-10-30T00:00:50.105895603Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:50.108815 containerd[1621]: time="2025-10-30T00:00:50.108747266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:50.109412 containerd[1621]: time="2025-10-30T00:00:50.109361088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 3.106404624s" Oct 30 00:00:50.109412 containerd[1621]: time="2025-10-30T00:00:50.109401112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 00:00:50.111745 containerd[1621]: time="2025-10-30T00:00:50.111681711Z" level=info msg="CreateContainer within sandbox \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 00:00:50.122673 containerd[1621]: time="2025-10-30T00:00:50.122614076Z" level=info msg="Container ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:00:50.132301 containerd[1621]: time="2025-10-30T00:00:50.132233899Z" level=info msg="CreateContainer within sandbox \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\"" Oct 30 00:00:50.135118 containerd[1621]: time="2025-10-30T00:00:50.133210300Z" level=info msg="StartContainer for \"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\"" Oct 30 00:00:50.135118 containerd[1621]: time="2025-10-30T00:00:50.134962715Z" level=info msg="connecting to 
shim ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a" address="unix:///run/containerd/s/4e5072a8f942bfa400cb66676444ace2c6df32edea36f6f1e45aebdf12e1b989" protocol=ttrpc version=3 Oct 30 00:00:50.162430 systemd[1]: Started cri-containerd-ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a.scope - libcontainer container ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a. Oct 30 00:00:50.227796 systemd[1]: cri-containerd-ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a.scope: Deactivated successfully. Oct 30 00:00:50.231599 containerd[1621]: time="2025-10-30T00:00:50.231548928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\" id:\"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\" pid:3516 exited_at:{seconds:1761782450 nanos:230922292}" Oct 30 00:00:50.285734 containerd[1621]: time="2025-10-30T00:00:50.285648555Z" level=info msg="received exit event container_id:\"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\" id:\"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\" pid:3516 exited_at:{seconds:1761782450 nanos:230922292}" Oct 30 00:00:50.299643 containerd[1621]: time="2025-10-30T00:00:50.299584974Z" level=info msg="StartContainer for \"ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a\" returns successfully" Oct 30 00:00:50.316350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecef0d9f3175ab69ecb0bf0cf6c8b0c7861e10f01d1057de3bfc5e9ed1c8847a-rootfs.mount: Deactivated successfully. Oct 30 00:00:50.617375 kubelet[2786]: E1030 00:00:50.617315 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:50.702998 kubelet[2786]: E1030 00:00:50.702115 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:51.706086 kubelet[2786]: E1030 00:00:51.706050 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:51.707002 containerd[1621]: time="2025-10-30T00:00:51.706952435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 00:00:52.617828 kubelet[2786]: E1030 00:00:52.617731 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:54.618297 kubelet[2786]: E1030 00:00:54.617700 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:55.054637 containerd[1621]: time="2025-10-30T00:00:55.054538227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:55.055363 containerd[1621]: time="2025-10-30T00:00:55.055299356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 00:00:55.056593 containerd[1621]: time="2025-10-30T00:00:55.056487964Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:55.058584 containerd[1621]: time="2025-10-30T00:00:55.058526926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:00:55.059366 containerd[1621]: time="2025-10-30T00:00:55.059335903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.352338475s" Oct 30 00:00:55.059448 containerd[1621]: time="2025-10-30T00:00:55.059368673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 00:00:55.061970 containerd[1621]: time="2025-10-30T00:00:55.061932375Z" level=info msg="CreateContainer within sandbox \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 00:00:55.077037 containerd[1621]: time="2025-10-30T00:00:55.076959473Z" level=info msg="Container b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:00:55.087227 containerd[1621]: time="2025-10-30T00:00:55.087166014Z" level=info msg="CreateContainer within sandbox \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\"" Oct 30 00:00:55.087920 containerd[1621]: time="2025-10-30T00:00:55.087878260Z" level=info msg="StartContainer for \"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\"" Oct 30 00:00:55.089343 containerd[1621]: time="2025-10-30T00:00:55.089310881Z" level=info msg="connecting to shim b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6" address="unix:///run/containerd/s/4e5072a8f942bfa400cb66676444ace2c6df32edea36f6f1e45aebdf12e1b989" protocol=ttrpc version=3 Oct 30 00:00:55.116343 systemd[1]: Started cri-containerd-b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6.scope - libcontainer container b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6. 
Oct 30 00:00:55.160938 containerd[1621]: time="2025-10-30T00:00:55.160902001Z" level=info msg="StartContainer for \"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\" returns successfully" Oct 30 00:00:55.717627 kubelet[2786]: E1030 00:00:55.717559 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:56.617845 kubelet[2786]: E1030 00:00:56.617718 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:00:56.719327 kubelet[2786]: E1030 00:00:56.719281 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:56.857210 systemd[1]: cri-containerd-b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6.scope: Deactivated successfully. Oct 30 00:00:56.857694 systemd[1]: cri-containerd-b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6.scope: Consumed 619ms CPU time, 175.2M memory peak, 3.1M read from disk, 171.3M written to disk. Oct 30 00:00:56.860610 containerd[1621]: time="2025-10-30T00:00:56.860557982Z" level=info msg="received exit event container_id:\"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\" id:\"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\" pid:3577 exited_at:{seconds:1761782456 nanos:860247728}" Oct 30 00:00:56.861032 containerd[1621]: time="2025-10-30T00:00:56.860697321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\" id:\"b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6\" pid:3577 exited_at:{seconds:1761782456 nanos:860247728}" Oct 30 00:00:56.892529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b85da30eb84d840a283da17f08138b3826c76a2161e55d5926d3d347057dabb6-rootfs.mount: Deactivated successfully. Oct 30 00:00:56.918251 kubelet[2786]: I1030 00:00:56.918201 2786 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 00:00:57.148839 systemd[1]: Created slice kubepods-burstable-pod36714a96_4960_46bd_99ac_7641ec2c1cb1.slice - libcontainer container kubepods-burstable-pod36714a96_4960_46bd_99ac_7641ec2c1cb1.slice. Oct 30 00:00:57.172346 systemd[1]: Created slice kubepods-besteffort-pod1f252106_e865_42bc_bcfa_ce876455a870.slice - libcontainer container kubepods-besteffort-pod1f252106_e865_42bc_bcfa_ce876455a870.slice. Oct 30 00:00:57.177966 systemd[1]: Created slice kubepods-besteffort-pod5d1e1dc8_5310_4db6_99a1_ad75bb29c5fa.slice - libcontainer container kubepods-besteffort-pod5d1e1dc8_5310_4db6_99a1_ad75bb29c5fa.slice. Oct 30 00:00:57.183538 systemd[1]: Created slice kubepods-besteffort-pod0bb0dc31_1db0_483e_b0fa_e4d89369c901.slice - libcontainer container kubepods-besteffort-pod0bb0dc31_1db0_483e_b0fa_e4d89369c901.slice. 
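Note on the repeated driver-call.go and plugins.go errors stamped 00:00:48 earlier in this log: they come from kubelet's FlexVolume plugin probe. It executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not present, so the call returns empty output and the JSON decode fails with "unexpected end of JSON input". The sketch below shows roughly what such a driver's init response looks like, assuming the conventional FlexVolume JSON status contract; it is illustrative only and is not the missing nodeagent~uds binary.

```go
// Hypothetical minimal FlexVolume-style driver. Only the "init" call is
// sketched, assuming the conventional JSON status contract that kubelet
// tries to unmarshal; an empty reply is what produces the
// "unexpected end of JSON input" warnings in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"missing command"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that this driver does not need a
		// separate attach step.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		fmt.Println(`{"status":"Not supported"}`)
	}
}
```

A binary of this shape placed in the nodeagent~uds directory would be expected to quiet the probe warnings; in this trace they are noise from plugin discovery rather than a failure of any running pod.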
Oct 30 00:00:57.184325 kubelet[2786]: I1030 00:00:57.183671 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwt78\" (UniqueName: \"kubernetes.io/projected/1f252106-e865-42bc-bcfa-ce876455a870-kube-api-access-pwt78\") pod \"calico-kube-controllers-75c88c4ddc-bshzv\" (UID: \"1f252106-e865-42bc-bcfa-ce876455a870\") " pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" Oct 30 00:00:57.184325 kubelet[2786]: I1030 00:00:57.183709 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-ca-bundle\") pod \"whisker-58ffc9d69c-lqh8p\" (UID: \"46c4714f-b151-44c7-998c-22f0b492d68d\") " pod="calico-system/whisker-58ffc9d69c-lqh8p" Oct 30 00:00:57.184325 kubelet[2786]: I1030 00:00:57.183743 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68bb771e-4dde-43d3-80f7-8e8958576aed-goldmane-ca-bundle\") pod \"goldmane-666569f655-qgxmh\" (UID: \"68bb771e-4dde-43d3-80f7-8e8958576aed\") " pod="calico-system/goldmane-666569f655-qgxmh" Oct 30 00:00:57.184325 kubelet[2786]: I1030 00:00:57.183761 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p742\" (UniqueName: \"kubernetes.io/projected/e179f99f-26b2-4c6c-96ed-bff21a0c48d7-kube-api-access-2p742\") pod \"coredns-668d6bf9bc-shcqz\" (UID: \"e179f99f-26b2-4c6c-96ed-bff21a0c48d7\") " pod="kube-system/coredns-668d6bf9bc-shcqz" Oct 30 00:00:57.184325 kubelet[2786]: I1030 00:00:57.183781 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bb0dc31-1db0-483e-b0fa-e4d89369c901-calico-apiserver-certs\") pod \"calico-apiserver-565d8bbfcd-6vcm7\" (UID: \"0bb0dc31-1db0-483e-b0fa-e4d89369c901\") " pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" Oct 30 00:00:57.184492 kubelet[2786]: I1030 00:00:57.183802 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2gnx\" (UniqueName: \"kubernetes.io/projected/68bb771e-4dde-43d3-80f7-8e8958576aed-kube-api-access-v2gnx\") pod \"goldmane-666569f655-qgxmh\" (UID: \"68bb771e-4dde-43d3-80f7-8e8958576aed\") " pod="calico-system/goldmane-666569f655-qgxmh" Oct 30 00:00:57.184492 kubelet[2786]: I1030 00:00:57.183823 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa-calico-apiserver-certs\") pod \"calico-apiserver-565d8bbfcd-6h8nd\" (UID: \"5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa\") " pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" Oct 30 00:00:57.184492 kubelet[2786]: I1030 00:00:57.183842 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff2lv\" (UniqueName: \"kubernetes.io/projected/5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa-kube-api-access-ff2lv\") pod \"calico-apiserver-565d8bbfcd-6h8nd\" (UID: \"5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa\") " pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" Oct 30 00:00:57.184492 kubelet[2786]: I1030 00:00:57.183860 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e179f99f-26b2-4c6c-96ed-bff21a0c48d7-config-volume\") pod \"coredns-668d6bf9bc-shcqz\" (UID: \"e179f99f-26b2-4c6c-96ed-bff21a0c48d7\") " pod="kube-system/coredns-668d6bf9bc-shcqz" Oct 30 00:00:57.184492 kubelet[2786]: I1030 00:00:57.183879 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68bb771e-4dde-43d3-80f7-8e8958576aed-config\") pod \"goldmane-666569f655-qgxmh\" (UID: \"68bb771e-4dde-43d3-80f7-8e8958576aed\") " pod="calico-system/goldmane-666569f655-qgxmh" Oct 30 00:00:57.184618 kubelet[2786]: I1030 00:00:57.183897 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36714a96-4960-46bd-99ac-7641ec2c1cb1-config-volume\") pod \"coredns-668d6bf9bc-595np\" (UID: \"36714a96-4960-46bd-99ac-7641ec2c1cb1\") " pod="kube-system/coredns-668d6bf9bc-595np" Oct 30 00:00:57.184618 kubelet[2786]: I1030 00:00:57.183917 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-backend-key-pair\") pod \"whisker-58ffc9d69c-lqh8p\" (UID: \"46c4714f-b151-44c7-998c-22f0b492d68d\") " pod="calico-system/whisker-58ffc9d69c-lqh8p" Oct 30 00:00:57.184618 kubelet[2786]: I1030 00:00:57.183934 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rwj\" (UniqueName: \"kubernetes.io/projected/46c4714f-b151-44c7-998c-22f0b492d68d-kube-api-access-n9rwj\") pod \"whisker-58ffc9d69c-lqh8p\" (UID: \"46c4714f-b151-44c7-998c-22f0b492d68d\") " pod="calico-system/whisker-58ffc9d69c-lqh8p" Oct 30 00:00:57.184618 kubelet[2786]: I1030 00:00:57.183952 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/68bb771e-4dde-43d3-80f7-8e8958576aed-goldmane-key-pair\") pod \"goldmane-666569f655-qgxmh\" (UID: \"68bb771e-4dde-43d3-80f7-8e8958576aed\") " pod="calico-system/goldmane-666569f655-qgxmh" Oct 30 00:00:57.184618 kubelet[2786]: I1030 00:00:57.183973 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f252106-e865-42bc-bcfa-ce876455a870-tigera-ca-bundle\") pod \"calico-kube-controllers-75c88c4ddc-bshzv\" (UID: \"1f252106-e865-42bc-bcfa-ce876455a870\") " pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" Oct 30 00:00:57.184733 kubelet[2786]: I1030 00:00:57.183997 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhffl\" (UniqueName: \"kubernetes.io/projected/0bb0dc31-1db0-483e-b0fa-e4d89369c901-kube-api-access-xhffl\") pod \"calico-apiserver-565d8bbfcd-6vcm7\" (UID: \"0bb0dc31-1db0-483e-b0fa-e4d89369c901\") " pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" Oct 30 00:00:57.184733 kubelet[2786]: I1030 00:00:57.184016 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpjh\" (UniqueName: \"kubernetes.io/projected/36714a96-4960-46bd-99ac-7641ec2c1cb1-kube-api-access-dkpjh\") pod \"coredns-668d6bf9bc-595np\" (UID: \"36714a96-4960-46bd-99ac-7641ec2c1cb1\") " pod="kube-system/coredns-668d6bf9bc-595np" 
Oct 30 00:00:57.189240 systemd[1]: Created slice kubepods-burstable-pode179f99f_26b2_4c6c_96ed_bff21a0c48d7.slice - libcontainer container kubepods-burstable-pode179f99f_26b2_4c6c_96ed_bff21a0c48d7.slice. Oct 30 00:00:57.194700 systemd[1]: Created slice kubepods-besteffort-pod46c4714f_b151_44c7_998c_22f0b492d68d.slice - libcontainer container kubepods-besteffort-pod46c4714f_b151_44c7_998c_22f0b492d68d.slice. Oct 30 00:00:57.199461 systemd[1]: Created slice kubepods-besteffort-pod68bb771e_4dde_43d3_80f7_8e8958576aed.slice - libcontainer container kubepods-besteffort-pod68bb771e_4dde_43d3_80f7_8e8958576aed.slice. Oct 30 00:00:57.465680 kubelet[2786]: E1030 00:00:57.465619 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:57.466426 containerd[1621]: time="2025-10-30T00:00:57.466381892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-595np,Uid:36714a96-4960-46bd-99ac-7641ec2c1cb1,Namespace:kube-system,Attempt:0,}" Oct 30 00:00:57.475592 containerd[1621]: time="2025-10-30T00:00:57.475537051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c88c4ddc-bshzv,Uid:1f252106-e865-42bc-bcfa-ce876455a870,Namespace:calico-system,Attempt:0,}" Oct 30 00:00:57.492380 kubelet[2786]: E1030 00:00:57.492334 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:57.494061 containerd[1621]: time="2025-10-30T00:00:57.493741044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6h8nd,Uid:5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:00:57.494061 containerd[1621]: time="2025-10-30T00:00:57.493822204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shcqz,Uid:e179f99f-26b2-4c6c-96ed-bff21a0c48d7,Namespace:kube-system,Attempt:0,}" Oct 30 00:00:57.494061 containerd[1621]: time="2025-10-30T00:00:57.493741054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6vcm7,Uid:0bb0dc31-1db0-483e-b0fa-e4d89369c901,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:00:57.499256 containerd[1621]: time="2025-10-30T00:00:57.499011703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58ffc9d69c-lqh8p,Uid:46c4714f-b151-44c7-998c-22f0b492d68d,Namespace:calico-system,Attempt:0,}" Oct 30 00:00:57.502138 containerd[1621]: time="2025-10-30T00:00:57.502078961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qgxmh,Uid:68bb771e-4dde-43d3-80f7-8e8958576aed,Namespace:calico-system,Attempt:0,}" Oct 30 00:00:57.647240 containerd[1621]: time="2025-10-30T00:00:57.647169241Z" level=error msg="Failed to destroy network for sandbox \"1fff5f7c953c860340ca886fb3dbaa04052fd03e8a6f77e1658507f0063839f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.655201 containerd[1621]: time="2025-10-30T00:00:57.654911024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shcqz,Uid:e179f99f-26b2-4c6c-96ed-bff21a0c48d7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1fff5f7c953c860340ca886fb3dbaa04052fd03e8a6f77e1658507f0063839f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.666081 kubelet[2786]: E1030 00:00:57.666011 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fff5f7c953c860340ca886fb3dbaa04052fd03e8a6f77e1658507f0063839f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.666294 kubelet[2786]: E1030 00:00:57.666116 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fff5f7c953c860340ca886fb3dbaa04052fd03e8a6f77e1658507f0063839f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-shcqz" Oct 30 00:00:57.666294 kubelet[2786]: E1030 00:00:57.666139 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fff5f7c953c860340ca886fb3dbaa04052fd03e8a6f77e1658507f0063839f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-shcqz" Oct 30 00:00:57.666294 kubelet[2786]: E1030 00:00:57.666194 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-shcqz_kube-system(e179f99f-26b2-4c6c-96ed-bff21a0c48d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-shcqz_kube-system(e179f99f-26b2-4c6c-96ed-bff21a0c48d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fff5f7c953c860340ca886fb3dbaa04052fd03e8a6f77e1658507f0063839f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-shcqz" podUID="e179f99f-26b2-4c6c-96ed-bff21a0c48d7" Oct 30 00:00:57.667268 containerd[1621]: time="2025-10-30T00:00:57.667066739Z" level=error msg="Failed to destroy network for sandbox \"d8c8f1cc74b18b9701f1f938364e1f9f7dfa17851c94070d649134c17295118d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.669336 containerd[1621]: time="2025-10-30T00:00:57.669292830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c88c4ddc-bshzv,Uid:1f252106-e865-42bc-bcfa-ce876455a870,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c8f1cc74b18b9701f1f938364e1f9f7dfa17851c94070d649134c17295118d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.669811 kubelet[2786]: E1030 00:00:57.669774 2786 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c8f1cc74b18b9701f1f938364e1f9f7dfa17851c94070d649134c17295118d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.669874 kubelet[2786]: E1030 00:00:57.669826 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c8f1cc74b18b9701f1f938364e1f9f7dfa17851c94070d649134c17295118d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" Oct 30 00:00:57.669874 kubelet[2786]: E1030 00:00:57.669847 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c8f1cc74b18b9701f1f938364e1f9f7dfa17851c94070d649134c17295118d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" Oct 30 00:00:57.670189 kubelet[2786]: E1030 00:00:57.670076 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8c8f1cc74b18b9701f1f938364e1f9f7dfa17851c94070d649134c17295118d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:00:57.676921 containerd[1621]: time="2025-10-30T00:00:57.676728657Z" level=error msg="Failed to destroy network for sandbox \"30bd5cdad40ab19190ef33e3d3244f667b458324bc7948620c250b43f2ee5c02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.678993 containerd[1621]: time="2025-10-30T00:00:57.678936915Z" level=error msg="Failed to destroy network for sandbox \"ad8991691a7ffb0c22c133cda058af9eb35e2a8206b86f2ce38eb21966c5b198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.680367 containerd[1621]: time="2025-10-30T00:00:57.680053041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6h8nd,Uid:5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30bd5cdad40ab19190ef33e3d3244f667b458324bc7948620c250b43f2ee5c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Oct 30 00:00:57.681279 kubelet[2786]: E1030 00:00:57.681231 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30bd5cdad40ab19190ef33e3d3244f667b458324bc7948620c250b43f2ee5c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.681471 kubelet[2786]: E1030 00:00:57.681440 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30bd5cdad40ab19190ef33e3d3244f667b458324bc7948620c250b43f2ee5c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" Oct 30 00:00:57.681644 kubelet[2786]: E1030 00:00:57.681568 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30bd5cdad40ab19190ef33e3d3244f667b458324bc7948620c250b43f2ee5c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" Oct 30 00:00:57.681870 containerd[1621]: time="2025-10-30T00:00:57.681595096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-595np,Uid:36714a96-4960-46bd-99ac-7641ec2c1cb1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8991691a7ffb0c22c133cda058af9eb35e2a8206b86f2ce38eb21966c5b198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.681959 kubelet[2786]: E1030 00:00:57.681743 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8991691a7ffb0c22c133cda058af9eb35e2a8206b86f2ce38eb21966c5b198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.681959 kubelet[2786]: E1030 00:00:57.681774 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8991691a7ffb0c22c133cda058af9eb35e2a8206b86f2ce38eb21966c5b198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-595np" Oct 30 00:00:57.681959 kubelet[2786]: E1030 00:00:57.681795 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8991691a7ffb0c22c133cda058af9eb35e2a8206b86f2ce38eb21966c5b198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-595np" Oct 30 00:00:57.683154 kubelet[2786]: E1030 00:00:57.682118 2786 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565d8bbfcd-6h8nd_calico-apiserver(5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565d8bbfcd-6h8nd_calico-apiserver(5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30bd5cdad40ab19190ef33e3d3244f667b458324bc7948620c250b43f2ee5c02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:00:57.683154 kubelet[2786]: E1030 00:00:57.682179 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-595np_kube-system(36714a96-4960-46bd-99ac-7641ec2c1cb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-595np_kube-system(36714a96-4960-46bd-99ac-7641ec2c1cb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad8991691a7ffb0c22c133cda058af9eb35e2a8206b86f2ce38eb21966c5b198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-595np" podUID="36714a96-4960-46bd-99ac-7641ec2c1cb1" Oct 30 00:00:57.685249 containerd[1621]: time="2025-10-30T00:00:57.685157562Z" level=error msg="Failed to destroy network for sandbox \"27fa18b5f68d220acf4f390cfe8e13ba202db8bc100a1762fac31acc6b118557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.686791 containerd[1621]: time="2025-10-30T00:00:57.686737617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58ffc9d69c-lqh8p,Uid:46c4714f-b151-44c7-998c-22f0b492d68d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27fa18b5f68d220acf4f390cfe8e13ba202db8bc100a1762fac31acc6b118557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.686980 kubelet[2786]: E1030 00:00:57.686928 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27fa18b5f68d220acf4f390cfe8e13ba202db8bc100a1762fac31acc6b118557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.687177 kubelet[2786]: E1030 00:00:57.686993 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27fa18b5f68d220acf4f390cfe8e13ba202db8bc100a1762fac31acc6b118557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58ffc9d69c-lqh8p" Oct 30 00:00:57.687177 kubelet[2786]: E1030 00:00:57.687018 2786 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27fa18b5f68d220acf4f390cfe8e13ba202db8bc100a1762fac31acc6b118557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58ffc9d69c-lqh8p" Oct 30 00:00:57.687177 kubelet[2786]: E1030 00:00:57.687063 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58ffc9d69c-lqh8p_calico-system(46c4714f-b151-44c7-998c-22f0b492d68d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58ffc9d69c-lqh8p_calico-system(46c4714f-b151-44c7-998c-22f0b492d68d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27fa18b5f68d220acf4f390cfe8e13ba202db8bc100a1762fac31acc6b118557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58ffc9d69c-lqh8p" podUID="46c4714f-b151-44c7-998c-22f0b492d68d" Oct 30 00:00:57.702725 containerd[1621]: time="2025-10-30T00:00:57.702664495Z" level=error msg="Failed to destroy network for sandbox \"aa03d8ae44b85e8129498789932d62be992d5bb682d1991c681a1bb8818e8dcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.703606 containerd[1621]: time="2025-10-30T00:00:57.703558008Z" level=error msg="Failed to destroy network for sandbox \"2f011cdf2959ed1a410d5350c9e3093035efa946839b97d38176d884a5f7488a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.704247 containerd[1621]: time="2025-10-30T00:00:57.704187795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qgxmh,Uid:68bb771e-4dde-43d3-80f7-8e8958576aed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa03d8ae44b85e8129498789932d62be992d5bb682d1991c681a1bb8818e8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.704483 kubelet[2786]: E1030 00:00:57.704428 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa03d8ae44b85e8129498789932d62be992d5bb682d1991c681a1bb8818e8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.704577 kubelet[2786]: E1030 00:00:57.704492 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa03d8ae44b85e8129498789932d62be992d5bb682d1991c681a1bb8818e8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qgxmh" Oct 30 00:00:57.704577 
kubelet[2786]: E1030 00:00:57.704513 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa03d8ae44b85e8129498789932d62be992d5bb682d1991c681a1bb8818e8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qgxmh" Oct 30 00:00:57.704577 kubelet[2786]: E1030 00:00:57.704554 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qgxmh_calico-system(68bb771e-4dde-43d3-80f7-8e8958576aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qgxmh_calico-system(68bb771e-4dde-43d3-80f7-8e8958576aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa03d8ae44b85e8129498789932d62be992d5bb682d1991c681a1bb8818e8dcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:00:57.705298 containerd[1621]: time="2025-10-30T00:00:57.705245082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6vcm7,Uid:0bb0dc31-1db0-483e-b0fa-e4d89369c901,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f011cdf2959ed1a410d5350c9e3093035efa946839b97d38176d884a5f7488a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.705477 kubelet[2786]: E1030 00:00:57.705414 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f011cdf2959ed1a410d5350c9e3093035efa946839b97d38176d884a5f7488a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:57.705556 kubelet[2786]: E1030 00:00:57.705513 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f011cdf2959ed1a410d5350c9e3093035efa946839b97d38176d884a5f7488a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" Oct 30 00:00:57.705556 kubelet[2786]: E1030 00:00:57.705539 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f011cdf2959ed1a410d5350c9e3093035efa946839b97d38176d884a5f7488a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" Oct 30 00:00:57.705612 kubelet[2786]: E1030 00:00:57.705586 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f011cdf2959ed1a410d5350c9e3093035efa946839b97d38176d884a5f7488a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:00:57.723711 kubelet[2786]: E1030 00:00:57.723600 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:00:57.724387 containerd[1621]: time="2025-10-30T00:00:57.724361925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:00:58.625508 systemd[1]: Created slice kubepods-besteffort-pod68445617_ec60_49e4_ab10_bde455e7ecc9.slice - libcontainer container kubepods-besteffort-pod68445617_ec60_49e4_ab10_bde455e7ecc9.slice. Oct 30 00:00:58.628703 containerd[1621]: time="2025-10-30T00:00:58.628385828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjkt6,Uid:68445617-ec60-49e4-ab10-bde455e7ecc9,Namespace:calico-system,Attempt:0,}" Oct 30 00:00:58.705317 containerd[1621]: time="2025-10-30T00:00:58.705243120Z" level=error msg="Failed to destroy network for sandbox \"9c316a922c1c9044d54f28a86f0097d5b15dc0a762caf79787e430451a0a5aa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:58.708917 systemd[1]: run-netns-cni\x2d57f71d4b\x2d8601\x2ddf6a\x2d963e\x2d31bdc71df4e3.mount: Deactivated successfully. 
Oct 30 00:00:58.729232 containerd[1621]: time="2025-10-30T00:00:58.729139489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjkt6,Uid:68445617-ec60-49e4-ab10-bde455e7ecc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c316a922c1c9044d54f28a86f0097d5b15dc0a762caf79787e430451a0a5aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:58.729550 kubelet[2786]: E1030 00:00:58.729495 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c316a922c1c9044d54f28a86f0097d5b15dc0a762caf79787e430451a0a5aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:00:58.730072 kubelet[2786]: E1030 00:00:58.729583 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c316a922c1c9044d54f28a86f0097d5b15dc0a762caf79787e430451a0a5aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:58.730072 kubelet[2786]: E1030 00:00:58.729613 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c316a922c1c9044d54f28a86f0097d5b15dc0a762caf79787e430451a0a5aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:00:58.730072 kubelet[2786]: E1030 00:00:58.729683 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c316a922c1c9044d54f28a86f0097d5b15dc0a762caf79787e430451a0a5aa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:08.477243 kubelet[2786]: I1030 00:01:08.476981 2786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:01:08.478585 kubelet[2786]: E1030 00:01:08.477743 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:08.619767 kubelet[2786]: E1030 00:01:08.619705 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:08.623524 containerd[1621]: time="2025-10-30T00:01:08.621490653Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-595np,Uid:36714a96-4960-46bd-99ac-7641ec2c1cb1,Namespace:kube-system,Attempt:0,}" Oct 30 00:01:08.624050 containerd[1621]: time="2025-10-30T00:01:08.622010780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6vcm7,Uid:0bb0dc31-1db0-483e-b0fa-e4d89369c901,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:01:08.750867 kubelet[2786]: E1030 00:01:08.750713 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:09.024426 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:42780.service - OpenSSH per-connection server daemon (10.0.0.1:42780). Oct 30 00:01:09.124896 containerd[1621]: time="2025-10-30T00:01:09.124630624Z" level=error msg="Failed to destroy network for sandbox \"a6f7f2d0388e7dce6ea8e67fc582ddc4ed49281b1e931bd20a05cd52059ab8ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.127627 systemd[1]: run-netns-cni\x2d0e7aa2e1\x2dc69b\x2df943\x2d1e14\x2dfae839868974.mount: Deactivated successfully. Oct 30 00:01:09.139569 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 42780 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:09.142063 sshd-session[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:09.148873 containerd[1621]: time="2025-10-30T00:01:09.148727539Z" level=error msg="Failed to destroy network for sandbox \"3649a51cb497bf71f7a9a16ebe241c2e1963b2d5d615e8156235382c5e4645ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.153121 systemd[1]: run-netns-cni\x2db46128e9\x2dbfcd\x2d36f0\x2d1949\x2d72305ecee048.mount: Deactivated successfully. Oct 30 00:01:09.184853 systemd-logind[1592]: New session 10 of user core. Oct 30 00:01:09.193273 systemd[1]: Started session-10.scope - Session 10 of User core. 
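The recurring "Nameserver limits exceeded" warnings above come from the kubelet trimming the node's resolver list down to the three entries it reports applying (1.1.1.1, 1.0.0.1, 8.8.8.8). A small sketch of that count follows; the three-entry cap is kubelet behaviour rather than something the log states, and /etc/resolv.conf is assumed to be the file the kubelet reads on this host (on systemd-resolved hosts it may instead be /run/systemd/resolve/resolv.conf).

#!/usr/bin/env python3
# Sketch: count nameserver lines the way the kubelet warning above implies.
# Assumption: RESOLV_CONF is the file this kubelet was configured to read.
from pathlib import Path

RESOLV_CONF = "/etc/resolv.conf"
LIMIT = 3  # kubelet keeps at most this many nameservers for pod DNS

nameservers = [
    line.split()[1]
    for line in Path(RESOLV_CONF).read_text().splitlines()
    if line.strip().startswith("nameserver") and len(line.split()) > 1
]
print("configured:", nameservers)
if len(nameservers) > LIMIT:
    print(f"{len(nameservers) - LIMIT} entries beyond the first {LIMIT} are omitted,"
          " matching the 'Nameserver limits exceeded' warning")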
Oct 30 00:01:09.336159 containerd[1621]: time="2025-10-30T00:01:09.335955795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-595np,Uid:36714a96-4960-46bd-99ac-7641ec2c1cb1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f7f2d0388e7dce6ea8e67fc582ddc4ed49281b1e931bd20a05cd52059ab8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.336604 kubelet[2786]: E1030 00:01:09.336420 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f7f2d0388e7dce6ea8e67fc582ddc4ed49281b1e931bd20a05cd52059ab8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.336604 kubelet[2786]: E1030 00:01:09.336494 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f7f2d0388e7dce6ea8e67fc582ddc4ed49281b1e931bd20a05cd52059ab8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-595np" Oct 30 00:01:09.336604 kubelet[2786]: E1030 00:01:09.336526 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f7f2d0388e7dce6ea8e67fc582ddc4ed49281b1e931bd20a05cd52059ab8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-595np" Oct 30 00:01:09.337337 kubelet[2786]: E1030 00:01:09.336682 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-595np_kube-system(36714a96-4960-46bd-99ac-7641ec2c1cb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-595np_kube-system(36714a96-4960-46bd-99ac-7641ec2c1cb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6f7f2d0388e7dce6ea8e67fc582ddc4ed49281b1e931bd20a05cd52059ab8ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-595np" podUID="36714a96-4960-46bd-99ac-7641ec2c1cb1" Oct 30 00:01:09.379413 containerd[1621]: time="2025-10-30T00:01:09.379325948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6vcm7,Uid:0bb0dc31-1db0-483e-b0fa-e4d89369c901,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3649a51cb497bf71f7a9a16ebe241c2e1963b2d5d615e8156235382c5e4645ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.379765 kubelet[2786]: E1030 00:01:09.379578 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"3649a51cb497bf71f7a9a16ebe241c2e1963b2d5d615e8156235382c5e4645ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.379765 kubelet[2786]: E1030 00:01:09.379638 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3649a51cb497bf71f7a9a16ebe241c2e1963b2d5d615e8156235382c5e4645ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" Oct 30 00:01:09.379765 kubelet[2786]: E1030 00:01:09.379665 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3649a51cb497bf71f7a9a16ebe241c2e1963b2d5d615e8156235382c5e4645ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" Oct 30 00:01:09.379987 kubelet[2786]: E1030 00:01:09.379704 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3649a51cb497bf71f7a9a16ebe241c2e1963b2d5d615e8156235382c5e4645ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:01:09.388348 sshd[3957]: Connection closed by 10.0.0.1 port 42780 Oct 30 00:01:09.389071 sshd-session[3914]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:09.394420 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:01:09.394798 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:42780.service: Deactivated successfully. Oct 30 00:01:09.397735 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:01:09.400210 systemd-logind[1592]: Removed session 10. 
Oct 30 00:01:09.622456 containerd[1621]: time="2025-10-30T00:01:09.621667409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c88c4ddc-bshzv,Uid:1f252106-e865-42bc-bcfa-ce876455a870,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:09.622657 containerd[1621]: time="2025-10-30T00:01:09.622462116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjkt6,Uid:68445617-ec60-49e4-ab10-bde455e7ecc9,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:09.840303 containerd[1621]: time="2025-10-30T00:01:09.840234361Z" level=error msg="Failed to destroy network for sandbox \"e03ca03b786094a61d4b8df8edc34e17d5e70d90132bbb2f137836ea2263b98c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.842026 containerd[1621]: time="2025-10-30T00:01:09.841982821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c88c4ddc-bshzv,Uid:1f252106-e865-42bc-bcfa-ce876455a870,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e03ca03b786094a61d4b8df8edc34e17d5e70d90132bbb2f137836ea2263b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.842914 kubelet[2786]: E1030 00:01:09.842330 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e03ca03b786094a61d4b8df8edc34e17d5e70d90132bbb2f137836ea2263b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.842914 kubelet[2786]: E1030 00:01:09.842438 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e03ca03b786094a61d4b8df8edc34e17d5e70d90132bbb2f137836ea2263b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" Oct 30 00:01:09.842914 kubelet[2786]: E1030 00:01:09.842466 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e03ca03b786094a61d4b8df8edc34e17d5e70d90132bbb2f137836ea2263b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" Oct 30 00:01:09.843319 kubelet[2786]: E1030 00:01:09.842524 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e03ca03b786094a61d4b8df8edc34e17d5e70d90132bbb2f137836ea2263b98c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:01:09.861589 containerd[1621]: time="2025-10-30T00:01:09.861530461Z" level=error msg="Failed to destroy network for sandbox \"dab62870a38f42821b3fc72037f9319c50353ae9814d488e647592d46a571ad3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.863110 containerd[1621]: time="2025-10-30T00:01:09.862960839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjkt6,Uid:68445617-ec60-49e4-ab10-bde455e7ecc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dab62870a38f42821b3fc72037f9319c50353ae9814d488e647592d46a571ad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.863286 kubelet[2786]: E1030 00:01:09.863246 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dab62870a38f42821b3fc72037f9319c50353ae9814d488e647592d46a571ad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:01:09.863351 kubelet[2786]: E1030 00:01:09.863307 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dab62870a38f42821b3fc72037f9319c50353ae9814d488e647592d46a571ad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:01:09.863351 kubelet[2786]: E1030 00:01:09.863329 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dab62870a38f42821b3fc72037f9319c50353ae9814d488e647592d46a571ad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kjkt6" Oct 30 00:01:09.863457 kubelet[2786]: E1030 00:01:09.863374 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dab62870a38f42821b3fc72037f9319c50353ae9814d488e647592d46a571ad3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:09.981907 systemd[1]: run-netns-cni\x2d057d2e18\x2db68e\x2d80c6\x2dba7c\x2d37e23bc7fd70.mount: Deactivated 
successfully. Oct 30 00:01:09.993381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753080534.mount: Deactivated successfully. Oct 30 00:01:10.022716 containerd[1621]: time="2025-10-30T00:01:10.022636326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:10.023515 containerd[1621]: time="2025-10-30T00:01:10.023464446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:01:10.024644 containerd[1621]: time="2025-10-30T00:01:10.024607982Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:10.026469 containerd[1621]: time="2025-10-30T00:01:10.026424570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:10.026955 containerd[1621]: time="2025-10-30T00:01:10.026911405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.302517411s" Oct 30 00:01:10.026955 containerd[1621]: time="2025-10-30T00:01:10.026941922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:01:10.038503 containerd[1621]: time="2025-10-30T00:01:10.038305493Z" level=info msg="CreateContainer within sandbox \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:01:10.057678 containerd[1621]: time="2025-10-30T00:01:10.057640386Z" level=info msg="Container 8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:01:10.070177 containerd[1621]: time="2025-10-30T00:01:10.070112538Z" level=info msg="CreateContainer within sandbox \"22bd6c0dc49780ddc4bcab24912b57ce6d250021424864127b1cf1b3a7cef709\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\"" Oct 30 00:01:10.070772 containerd[1621]: time="2025-10-30T00:01:10.070705069Z" level=info msg="StartContainer for \"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\"" Oct 30 00:01:10.072606 containerd[1621]: time="2025-10-30T00:01:10.072566441Z" level=info msg="connecting to shim 8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0" address="unix:///run/containerd/s/4e5072a8f942bfa400cb66676444ace2c6df32edea36f6f1e45aebdf12e1b989" protocol=ttrpc version=3 Oct 30 00:01:10.183415 systemd[1]: Started cri-containerd-8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0.scope - libcontainer container 8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0. 
Oct 30 00:01:10.260688 containerd[1621]: time="2025-10-30T00:01:10.260293248Z" level=info msg="StartContainer for \"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\" returns successfully" Oct 30 00:01:10.332265 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 00:01:10.332437 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 30 00:01:10.470291 kubelet[2786]: I1030 00:01:10.470219 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-ca-bundle\") pod \"46c4714f-b151-44c7-998c-22f0b492d68d\" (UID: \"46c4714f-b151-44c7-998c-22f0b492d68d\") " Oct 30 00:01:10.470291 kubelet[2786]: I1030 00:01:10.470291 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-backend-key-pair\") pod \"46c4714f-b151-44c7-998c-22f0b492d68d\" (UID: \"46c4714f-b151-44c7-998c-22f0b492d68d\") " Oct 30 00:01:10.470291 kubelet[2786]: I1030 00:01:10.470313 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rwj\" (UniqueName: \"kubernetes.io/projected/46c4714f-b151-44c7-998c-22f0b492d68d-kube-api-access-n9rwj\") pod \"46c4714f-b151-44c7-998c-22f0b492d68d\" (UID: \"46c4714f-b151-44c7-998c-22f0b492d68d\") " Oct 30 00:01:10.471447 kubelet[2786]: I1030 00:01:10.471346 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "46c4714f-b151-44c7-998c-22f0b492d68d" (UID: "46c4714f-b151-44c7-998c-22f0b492d68d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:01:10.474676 kubelet[2786]: I1030 00:01:10.474576 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "46c4714f-b151-44c7-998c-22f0b492d68d" (UID: "46c4714f-b151-44c7-998c-22f0b492d68d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:01:10.475265 kubelet[2786]: I1030 00:01:10.475224 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c4714f-b151-44c7-998c-22f0b492d68d-kube-api-access-n9rwj" (OuterVolumeSpecName: "kube-api-access-n9rwj") pod "46c4714f-b151-44c7-998c-22f0b492d68d" (UID: "46c4714f-b151-44c7-998c-22f0b492d68d"). InnerVolumeSpecName "kube-api-access-n9rwj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:01:10.570789 kubelet[2786]: I1030 00:01:10.570615 2786 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 00:01:10.570789 kubelet[2786]: I1030 00:01:10.570667 2786 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46c4714f-b151-44c7-998c-22f0b492d68d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 00:01:10.570789 kubelet[2786]: I1030 00:01:10.570685 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n9rwj\" (UniqueName: \"kubernetes.io/projected/46c4714f-b151-44c7-998c-22f0b492d68d-kube-api-access-n9rwj\") on node \"localhost\" DevicePath \"\"" Oct 30 00:01:10.617523 kubelet[2786]: E1030 00:01:10.617477 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:10.618580 containerd[1621]: time="2025-10-30T00:01:10.618226264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shcqz,Uid:e179f99f-26b2-4c6c-96ed-bff21a0c48d7,Namespace:kube-system,Attempt:0,}" Oct 30 00:01:10.618580 containerd[1621]: time="2025-10-30T00:01:10.618284032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6h8nd,Uid:5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:01:10.767124 kubelet[2786]: E1030 00:01:10.765660 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:10.778898 systemd[1]: Removed slice kubepods-besteffort-pod46c4714f_b151_44c7_998c_22f0b492d68d.slice - libcontainer container kubepods-besteffort-pod46c4714f_b151_44c7_998c_22f0b492d68d.slice. Oct 30 00:01:10.791769 kubelet[2786]: I1030 00:01:10.791427 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-877q2" podStartSLOduration=1.214197436 podStartE2EDuration="31.79141016s" podCreationTimestamp="2025-10-30 00:00:39 +0000 UTC" firstStartedPulling="2025-10-30 00:00:39.450455579 +0000 UTC m=+21.921919122" lastFinishedPulling="2025-10-30 00:01:10.027668303 +0000 UTC m=+52.499131846" observedRunningTime="2025-10-30 00:01:10.788454153 +0000 UTC m=+53.259917716" watchObservedRunningTime="2025-10-30 00:01:10.79141016 +0000 UTC m=+53.262873713" Oct 30 00:01:10.854362 systemd[1]: Created slice kubepods-besteffort-pod5d1c4010_0c3f_4d18_a5fa_aaa1ffb9aacc.slice - libcontainer container kubepods-besteffort-pod5d1c4010_0c3f_4d18_a5fa_aaa1ffb9aacc.slice. 
Oct 30 00:01:10.873327 kubelet[2786]: I1030 00:01:10.873162 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc-whisker-backend-key-pair\") pod \"whisker-77dd57745-5wrzz\" (UID: \"5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc\") " pod="calico-system/whisker-77dd57745-5wrzz" Oct 30 00:01:10.874350 kubelet[2786]: I1030 00:01:10.874314 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc-whisker-ca-bundle\") pod \"whisker-77dd57745-5wrzz\" (UID: \"5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc\") " pod="calico-system/whisker-77dd57745-5wrzz" Oct 30 00:01:10.874350 kubelet[2786]: I1030 00:01:10.874340 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bpv\" (UniqueName: \"kubernetes.io/projected/5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc-kube-api-access-r6bpv\") pod \"whisker-77dd57745-5wrzz\" (UID: \"5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc\") " pod="calico-system/whisker-77dd57745-5wrzz" Oct 30 00:01:10.897066 systemd-networkd[1501]: cali04b603083d6: Link UP Oct 30 00:01:10.898130 systemd-networkd[1501]: cali04b603083d6: Gained carrier Oct 30 00:01:10.925638 containerd[1621]: 2025-10-30 00:01:10.645 [INFO][4098] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:01:10.925638 containerd[1621]: 2025-10-30 00:01:10.672 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--shcqz-eth0 coredns-668d6bf9bc- kube-system e179f99f-26b2-4c6c-96ed-bff21a0c48d7 842 0 2025-10-30 00:00:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-shcqz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04b603083d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-" Oct 30 00:01:10.925638 containerd[1621]: 2025-10-30 00:01:10.672 [INFO][4098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.925638 containerd[1621]: 2025-10-30 00:01:10.756 [INFO][4112] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" HandleID="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Workload="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.760 [INFO][4112] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" HandleID="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Workload="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000343230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-shcqz", "timestamp":"2025-10-30 00:01:10.756752243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.760 [INFO][4112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.760 [INFO][4112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.761 [INFO][4112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.797 [INFO][4112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" host="localhost" Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.809 [INFO][4112] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.831 [INFO][4112] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.836 [INFO][4112] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.851 [INFO][4112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:10.926291 containerd[1621]: 2025-10-30 00:01:10.851 [INFO][4112] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" host="localhost" Oct 30 00:01:10.926580 containerd[1621]: 2025-10-30 00:01:10.857 [INFO][4112] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586 Oct 30 00:01:10.926580 containerd[1621]: 2025-10-30 00:01:10.863 [INFO][4112] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" host="localhost" Oct 30 00:01:10.926580 containerd[1621]: 2025-10-30 00:01:10.871 [INFO][4112] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" host="localhost" Oct 30 00:01:10.926580 containerd[1621]: 2025-10-30 00:01:10.871 [INFO][4112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" host="localhost" Oct 30 00:01:10.926580 containerd[1621]: 2025-10-30 00:01:10.871 [INFO][4112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
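The IPAM lines above assign 192.168.88.129 out of the affine block 192.168.88.128/26. The short sketch below only makes that CIDR arithmetic concrete with the standard library; it is not Calico's IPAM code.

#!/usr/bin/env python3
# Plain CIDR arithmetic for the block and address quoted in the IPAM lines above.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.129")

print("block size :", block.num_addresses)           # 64 addresses
print("block range:", block[0], "-", block[-1])       # 192.168.88.128 - 192.168.88.191
print("assigned in block:", assigned in block)        # True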
Oct 30 00:01:10.926580 containerd[1621]: 2025-10-30 00:01:10.872 [INFO][4112] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" HandleID="k8s-pod-network.7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Workload="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.926751 containerd[1621]: 2025-10-30 00:01:10.886 [INFO][4098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--shcqz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e179f99f-26b2-4c6c-96ed-bff21a0c48d7", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-shcqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04b603083d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:10.926861 containerd[1621]: 2025-10-30 00:01:10.886 [INFO][4098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.926861 containerd[1621]: 2025-10-30 00:01:10.886 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04b603083d6 ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.926861 containerd[1621]: 2025-10-30 00:01:10.898 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.926956 
containerd[1621]: 2025-10-30 00:01:10.902 [INFO][4098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--shcqz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e179f99f-26b2-4c6c-96ed-bff21a0c48d7", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586", Pod:"coredns-668d6bf9bc-shcqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04b603083d6", MAC:"fe:c9:07:f7:3c:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:10.926956 containerd[1621]: 2025-10-30 00:01:10.921 [INFO][4098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" Namespace="kube-system" Pod="coredns-668d6bf9bc-shcqz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--shcqz-eth0" Oct 30 00:01:10.956197 systemd-networkd[1501]: califf44ff48c95: Link UP Oct 30 00:01:10.957310 systemd-networkd[1501]: califf44ff48c95: Gained carrier Oct 30 00:01:10.982461 systemd[1]: var-lib-kubelet-pods-46c4714f\x2db151\x2d44c7\x2d998c\x2d22f0b492d68d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn9rwj.mount: Deactivated successfully. Oct 30 00:01:10.982578 systemd[1]: var-lib-kubelet-pods-46c4714f\x2db151\x2d44c7\x2d998c\x2d22f0b492d68d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
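The \x2d and \x7e sequences in the mount-unit names above are systemd's escaping of '-' and '~' inside the underlying paths, with an unescaped '-' standing in for '/'. The decoder below is a hypothetical helper written only for illustration, not a systemd API; systemd ships an equivalent in the systemd-escape tool.

#!/usr/bin/env python3
# Hypothetical helper: decode a systemd path-based unit name back into a filesystem path.
def systemd_unescape(unit: str) -> str:
    name = unit.removesuffix(".mount")
    out, i = [], 0
    while i < len(name):
        if name.startswith("\\x", i) and i + 4 <= len(name):
            out.append(chr(int(name[i + 2:i + 4], 16)))  # \x2d -> '-', \x7e -> '~'
            i += 4
        elif name[i] == "-":
            out.append("/")  # unescaped '-' is the path separator
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

print(systemd_unescape(r"run-netns-cni\x2d57f71d4b\x2d8601\x2ddf6a\x2d963e\x2d31bdc71df4e3.mount"))
# -> /run/netns/cni-57f71d4b-8601-df6a-963e-31bdc71df4e3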
Oct 30 00:01:10.998583 containerd[1621]: time="2025-10-30T00:01:10.998518447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\" id:\"5bdfc5c03a9c033b7aee1e0ccaa5f304e2f86241bbc8c4593445516d2506b8bf\" pid:4158 exit_status:1 exited_at:{seconds:1761782470 nanos:998036902}" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.731 [INFO][4117] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.750 [INFO][4117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0 calico-apiserver-565d8bbfcd- calico-apiserver 5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa 840 0 2025-10-30 00:00:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565d8bbfcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-565d8bbfcd-6h8nd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf44ff48c95 [] [] }} ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.752 [INFO][4117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.798 [INFO][4135] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" HandleID="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Workload="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.798 [INFO][4135] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" HandleID="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Workload="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-565d8bbfcd-6h8nd", "timestamp":"2025-10-30 00:01:10.798427909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.798 [INFO][4135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.872 [INFO][4135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.872 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.891 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.911 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.921 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.926 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.929 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.929 [INFO][4135] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.932 [INFO][4135] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485 Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.941 [INFO][4135] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.949 [INFO][4135] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.949 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" host="localhost" Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.949 [INFO][4135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:01:11.121255 containerd[1621]: 2025-10-30 00:01:10.949 [INFO][4135] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" HandleID="k8s-pod-network.6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Workload="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.122049 containerd[1621]: 2025-10-30 00:01:10.953 [INFO][4117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0", GenerateName:"calico-apiserver-565d8bbfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565d8bbfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-565d8bbfcd-6h8nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf44ff48c95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:11.122049 containerd[1621]: 2025-10-30 00:01:10.953 [INFO][4117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.122049 containerd[1621]: 2025-10-30 00:01:10.953 [INFO][4117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf44ff48c95 ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.122049 containerd[1621]: 2025-10-30 00:01:10.957 [INFO][4117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.122049 containerd[1621]: 2025-10-30 00:01:10.958 [INFO][4117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0", GenerateName:"calico-apiserver-565d8bbfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565d8bbfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485", Pod:"calico-apiserver-565d8bbfcd-6h8nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf44ff48c95", MAC:"a2:48:c1:ec:25:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:11.122049 containerd[1621]: 2025-10-30 00:01:11.116 [INFO][4117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6h8nd" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6h8nd-eth0" Oct 30 00:01:11.160610 containerd[1621]: time="2025-10-30T00:01:11.160552729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77dd57745-5wrzz,Uid:5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:11.179198 containerd[1621]: time="2025-10-30T00:01:11.179094417Z" level=info msg="connecting to shim 6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485" address="unix:///run/containerd/s/6ef1c04080cb6eb72e4b6c327f62c1e78dcda87127ee7e51b94cd17df738a41a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:11.182340 containerd[1621]: time="2025-10-30T00:01:11.182289450Z" level=info msg="connecting to shim 7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586" address="unix:///run/containerd/s/bfca4a7b95ef0a1936501fa69a2fdf06b40a5110bccb69e7fe35e62ef4bb1e06" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:11.209346 systemd[1]: Started cri-containerd-6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485.scope - libcontainer container 6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485. Oct 30 00:01:11.232587 systemd[1]: Started cri-containerd-7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586.scope - libcontainer container 7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586. 
Oct 30 00:01:11.241507 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:11.259589 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:11.299828 systemd-networkd[1501]: calib5336c203c1: Link UP Oct 30 00:01:11.301208 systemd-networkd[1501]: calib5336c203c1: Gained carrier Oct 30 00:01:11.413147 containerd[1621]: time="2025-10-30T00:01:11.411597831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6h8nd,Uid:5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6dd8af1aac54ddcbe6811ebdd70c62cd5137ec6c8ed61f70868e9a71d2d20485\"" Oct 30 00:01:11.414873 containerd[1621]: time="2025-10-30T00:01:11.413945218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shcqz,Uid:e179f99f-26b2-4c6c-96ed-bff21a0c48d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586\"" Oct 30 00:01:11.417363 containerd[1621]: time="2025-10-30T00:01:11.417301100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:01:11.418073 kubelet[2786]: E1030 00:01:11.417577 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:11.420895 containerd[1621]: time="2025-10-30T00:01:11.420844380Z" level=info msg="CreateContainer within sandbox \"7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.196 [INFO][4194] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.210 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77dd57745--5wrzz-eth0 whisker-77dd57745- calico-system 5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc 976 0 2025-10-30 00:01:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77dd57745 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77dd57745-5wrzz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib5336c203c1 [] [] }} ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.210 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.239 [INFO][4264] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" HandleID="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Workload="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.240 [INFO][4264] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" HandleID="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Workload="localhost-k8s-whisker--77dd57745--5wrzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c0b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77dd57745-5wrzz", "timestamp":"2025-10-30 00:01:11.239814886 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.240 [INFO][4264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.240 [INFO][4264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.240 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.249 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.254 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.260 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.263 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.266 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.266 [INFO][4264] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.267 [INFO][4264] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.272 [INFO][4264] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.281 [INFO][4264] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.281 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" host="localhost" Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.281 [INFO][4264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:01:11.428199 containerd[1621]: 2025-10-30 00:01:11.282 [INFO][4264] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" HandleID="k8s-pod-network.a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Workload="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.428932 containerd[1621]: 2025-10-30 00:01:11.289 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77dd57745--5wrzz-eth0", GenerateName:"whisker-77dd57745-", Namespace:"calico-system", SelfLink:"", UID:"5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77dd57745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77dd57745-5wrzz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib5336c203c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:11.428932 containerd[1621]: 2025-10-30 00:01:11.290 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.428932 containerd[1621]: 2025-10-30 00:01:11.290 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5336c203c1 ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.428932 containerd[1621]: 2025-10-30 00:01:11.301 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.428932 containerd[1621]: 2025-10-30 00:01:11.302 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77dd57745--5wrzz-eth0", GenerateName:"whisker-77dd57745-", Namespace:"calico-system", SelfLink:"", UID:"5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77dd57745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c", Pod:"whisker-77dd57745-5wrzz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib5336c203c1", MAC:"aa:44:1b:6d:dc:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:11.428932 containerd[1621]: 2025-10-30 00:01:11.415 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" Namespace="calico-system" Pod="whisker-77dd57745-5wrzz" WorkloadEndpoint="localhost-k8s-whisker--77dd57745--5wrzz-eth0" Oct 30 00:01:11.439216 containerd[1621]: time="2025-10-30T00:01:11.439170587Z" level=info msg="Container 7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:01:11.445394 containerd[1621]: time="2025-10-30T00:01:11.445350071Z" level=info msg="CreateContainer within sandbox \"7ac0d647d1199b4f4b1db900cb30d206f77d554aceb764f5976d181686b3e586\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04\"" Oct 30 00:01:11.446114 containerd[1621]: time="2025-10-30T00:01:11.445813433Z" level=info msg="StartContainer for \"7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04\"" Oct 30 00:01:11.446639 containerd[1621]: time="2025-10-30T00:01:11.446602651Z" level=info msg="connecting to shim 7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04" address="unix:///run/containerd/s/bfca4a7b95ef0a1936501fa69a2fdf06b40a5110bccb69e7fe35e62ef4bb1e06" protocol=ttrpc version=3 Oct 30 00:01:11.453051 containerd[1621]: time="2025-10-30T00:01:11.452997685Z" level=info msg="connecting to shim a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c" address="unix:///run/containerd/s/5c6b61e67d575f4ff2e33674f1eb798ed0390fee6ed359716581a034287524a2" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:11.467224 systemd[1]: Started cri-containerd-7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04.scope - libcontainer container 7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04. 
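The IPAM sequence recorded above follows the block-affinity pattern the log itself names: the host-wide IPAM lock is acquired, the host's affinity for the 192.168.88.128/26 block is confirmed, and a free /32 from that block is claimed for the new workload endpoint (192.168.88.131 for whisker-77dd57745-5wrzz here). The sketch below is a conceptual model of that last step only, not Calico's implementation; it mirrors the consecutive allocation order visible in this capture, and the assumption that 192.168.88.129 was claimed by an earlier endpoint is not shown in this excerpt.

import ipaddress

# Conceptual sketch of claiming the next free host address from an affine block.
# Not Calico's code; it only mirrors the allocation order visible in the log.
def next_free_address(block_cidr, already_claimed):
    block = ipaddress.ip_network(block_cidr)
    claimed = {ipaddress.ip_address(a) for a in already_claimed}
    for addr in block.hosts():          # .129 .. .190 for this /26, skipping network/broadcast
        if addr not in claimed:
            return addr
    return None                         # block exhausted; a real IPAM would claim another block

# .130 (calico-apiserver-565d8bbfcd-6h8nd) and .131 (whisker) appear in the log;
# .129 is assumed taken by an earlier endpoint outside this excerpt.
print(next_free_address("192.168.88.128/26",
                        ["192.168.88.129", "192.168.88.130", "192.168.88.131"]))
# -> 192.168.88.132, the address later handed to goldmane-666569f655-qgxmh
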
Oct 30 00:01:11.482620 systemd[1]: Started cri-containerd-a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c.scope - libcontainer container a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c. Oct 30 00:01:11.500675 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:11.517152 containerd[1621]: time="2025-10-30T00:01:11.517045483Z" level=info msg="StartContainer for \"7a00d84267cada607ee9f445af8125a764c642c49ebb398506a0c02890262b04\" returns successfully" Oct 30 00:01:11.534783 containerd[1621]: time="2025-10-30T00:01:11.534700803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77dd57745-5wrzz,Uid:5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6ec82ab9d2c29f585cf8b1995cd28f5a6591ddcc3f327c2bbaf0d970f07fd3c\"" Oct 30 00:01:11.625152 kubelet[2786]: I1030 00:01:11.624516 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c4714f-b151-44c7-998c-22f0b492d68d" path="/var/lib/kubelet/pods/46c4714f-b151-44c7-998c-22f0b492d68d/volumes" Oct 30 00:01:11.774735 kubelet[2786]: E1030 00:01:11.774680 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:11.779525 kubelet[2786]: E1030 00:01:11.779492 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:11.818065 containerd[1621]: time="2025-10-30T00:01:11.818000097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:11.820633 kubelet[2786]: I1030 00:01:11.819997 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-shcqz" podStartSLOduration=47.819976965 podStartE2EDuration="47.819976965s" podCreationTimestamp="2025-10-30 00:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:01:11.799143574 +0000 UTC m=+54.270607117" watchObservedRunningTime="2025-10-30 00:01:11.819976965 +0000 UTC m=+54.291440508" Oct 30 00:01:11.883383 containerd[1621]: time="2025-10-30T00:01:11.883312398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:11.889810 containerd[1621]: time="2025-10-30T00:01:11.889242218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:01:11.891983 kubelet[2786]: E1030 00:01:11.890479 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:11.891983 kubelet[2786]: E1030 00:01:11.890524 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:11.894992 containerd[1621]: time="2025-10-30T00:01:11.894889121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:01:11.909086 kubelet[2786]: E1030 00:01:11.908981 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff2lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6h8nd_calico-apiserver(5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:11.910885 kubelet[2786]: E1030 00:01:11.910815 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:01:11.984758 containerd[1621]: time="2025-10-30T00:01:11.984684059Z" level=info msg="TaskExit event 
in podsandbox handler container_id:\"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\" id:\"d194d6fffa817630f570a3fd05c3e4f5752e3dec7abd504eb79ea5ae1b1056bb\" pid:4493 exit_status:1 exited_at:{seconds:1761782471 nanos:984141930}" Oct 30 00:01:12.259830 systemd-networkd[1501]: vxlan.calico: Link UP Oct 30 00:01:12.259842 systemd-networkd[1501]: vxlan.calico: Gained carrier Oct 30 00:01:12.302661 containerd[1621]: time="2025-10-30T00:01:12.302607991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:12.303854 containerd[1621]: time="2025-10-30T00:01:12.303817451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:01:12.303924 containerd[1621]: time="2025-10-30T00:01:12.303898140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:01:12.304174 kubelet[2786]: E1030 00:01:12.304112 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:01:12.304319 kubelet[2786]: E1030 00:01:12.304180 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:01:12.304426 kubelet[2786]: E1030 00:01:12.304317 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:409e14720a524b50ad4f5846391334f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:12.306768 containerd[1621]: time="2025-10-30T00:01:12.306736101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:01:12.538321 systemd-networkd[1501]: calib5336c203c1: Gained IPv6LL Oct 30 00:01:12.618320 containerd[1621]: time="2025-10-30T00:01:12.618266906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qgxmh,Uid:68bb771e-4dde-43d3-80f7-8e8958576aed,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:12.733221 systemd-networkd[1501]: calib92c4620128: Link UP Oct 30 00:01:12.734431 systemd-networkd[1501]: calib92c4620128: Gained carrier Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.663 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--qgxmh-eth0 goldmane-666569f655- calico-system 68bb771e-4dde-43d3-80f7-8e8958576aed 844 0 2025-10-30 00:00:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-qgxmh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib92c4620128 [] [] }} ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.663 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.691 [INFO][4631] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" HandleID="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Workload="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.692 [INFO][4631] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" HandleID="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Workload="localhost-k8s-goldmane--666569f655--qgxmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-qgxmh", "timestamp":"2025-10-30 00:01:12.691891613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.692 [INFO][4631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.692 [INFO][4631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.692 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.700 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.706 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.710 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.712 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.714 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.714 [INFO][4631] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.716 [INFO][4631] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72 Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.720 [INFO][4631] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.725 [INFO][4631] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.725 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" host="localhost" Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.725 [INFO][4631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:01:12.751907 containerd[1621]: 2025-10-30 00:01:12.725 [INFO][4631] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" HandleID="k8s-pod-network.6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Workload="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.752903 containerd[1621]: 2025-10-30 00:01:12.729 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--qgxmh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68bb771e-4dde-43d3-80f7-8e8958576aed", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-qgxmh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib92c4620128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:12.752903 containerd[1621]: 2025-10-30 00:01:12.729 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.752903 containerd[1621]: 2025-10-30 00:01:12.729 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib92c4620128 ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.752903 containerd[1621]: 2025-10-30 00:01:12.735 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.752903 containerd[1621]: 2025-10-30 00:01:12.735 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--qgxmh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68bb771e-4dde-43d3-80f7-8e8958576aed", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72", Pod:"goldmane-666569f655-qgxmh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib92c4620128", MAC:"12:32:db:31:fa:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:12.752903 containerd[1621]: 2025-10-30 00:01:12.748 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" Namespace="calico-system" Pod="goldmane-666569f655-qgxmh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--qgxmh-eth0" Oct 30 00:01:12.777951 kubelet[2786]: E1030 00:01:12.777397 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:12.782242 kubelet[2786]: E1030 00:01:12.782190 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:01:12.791964 containerd[1621]: time="2025-10-30T00:01:12.791761097Z" level=info msg="connecting to shim 6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72" address="unix:///run/containerd/s/594f777b06c95f89c989f83ae3281534873b54ea4c2dac14f51023995b62b0eb" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:12.792264 systemd-networkd[1501]: cali04b603083d6: Gained IPv6LL Oct 30 00:01:12.795454 systemd-networkd[1501]: califf44ff48c95: Gained IPv6LL Oct 30 00:01:12.837278 systemd[1]: Started cri-containerd-6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72.scope - libcontainer container 6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72. 
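The kubelet dns.go:153 entries interleaved through this stretch ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") record kubelet dropping resolver entries beyond what it will pass through to pods, conventionally three, matching the classic libc resolver limit. A minimal sketch of that trimming, assuming resolv.conf-style input; the fourth entry (8.8.4.4) is purely hypothetical and stands in for whatever was actually omitted on this node.

MAX_NAMESERVERS = 3   # conventional resolver limit; kubelet warns when more are configured

def applied_nameservers(resolv_conf_text, limit=MAX_NAMESERVERS):
    # Keep the first `limit` nameserver entries, report the rest as omitted.
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:limit], servers[limit:]

kept, dropped = applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n")
print("applied:", " ".join(kept), "| omitted:", " ".join(dropped))
# -> applied: 1.1.1.1 1.0.0.1 8.8.8.8 | omitted: 8.8.4.4
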
Oct 30 00:01:12.852713 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:12.888066 containerd[1621]: time="2025-10-30T00:01:12.888012070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qgxmh,Uid:68bb771e-4dde-43d3-80f7-8e8958576aed,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bd75141c7fe913a5cf1b33914be32f2354bac8258d4a5cb2837c3c0e19aff72\"" Oct 30 00:01:13.623316 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL Oct 30 00:01:13.780362 kubelet[2786]: E1030 00:01:13.780309 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:13.842346 containerd[1621]: time="2025-10-30T00:01:13.842292496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:13.844017 containerd[1621]: time="2025-10-30T00:01:13.843973303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:01:13.844154 containerd[1621]: time="2025-10-30T00:01:13.844074572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:01:13.844359 kubelet[2786]: E1030 00:01:13.844266 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:01:13.844359 kubelet[2786]: E1030 00:01:13.844332 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:01:13.844640 kubelet[2786]: E1030 00:01:13.844568 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:13.844898 containerd[1621]: time="2025-10-30T00:01:13.844867648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:01:13.846368 kubelet[2786]: E1030 00:01:13.846284 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc" Oct 30 00:01:14.214923 containerd[1621]: time="2025-10-30T00:01:14.214838990Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:14.216453 containerd[1621]: time="2025-10-30T00:01:14.216385839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:01:14.216536 containerd[1621]: time="2025-10-30T00:01:14.216421125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:14.216754 kubelet[2786]: E1030 00:01:14.216705 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:01:14.216845 kubelet[2786]: E1030 00:01:14.216768 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:01:14.216991 kubelet[2786]: E1030 00:01:14.216933 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2gnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qgxmh_calico-system(68bb771e-4dde-43d3-80f7-8e8958576aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:14.218189 kubelet[2786]: E1030 00:01:14.218147 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:01:14.327353 systemd-networkd[1501]: calib92c4620128: Gained IPv6LL Oct 30 00:01:14.404486 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:41310.service - OpenSSH per-connection server daemon (10.0.0.1:41310). Oct 30 00:01:14.482080 sshd[4701]: Accepted publickey for core from 10.0.0.1 port 41310 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:14.484303 sshd-session[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:14.489751 systemd-logind[1592]: New session 11 of user core. Oct 30 00:01:14.499280 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 00:01:14.642488 sshd[4704]: Connection closed by 10.0.0.1 port 41310 Oct 30 00:01:14.642833 sshd-session[4701]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:14.648637 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:41310.service: Deactivated successfully. Oct 30 00:01:14.651508 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:01:14.653963 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:01:14.654943 systemd-logind[1592]: Removed session 11. 
Oct 30 00:01:14.782805 kubelet[2786]: E1030 00:01:14.782666 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:01:14.784161 kubelet[2786]: E1030 00:01:14.783940 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc" Oct 30 00:01:19.659020 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:41332.service - OpenSSH per-connection server daemon (10.0.0.1:41332). Oct 30 00:01:19.718885 sshd[4735]: Accepted publickey for core from 10.0.0.1 port 41332 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:19.720403 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:19.725025 systemd-logind[1592]: New session 12 of user core. Oct 30 00:01:19.731209 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 00:01:19.884244 sshd[4738]: Connection closed by 10.0.0.1 port 41332 Oct 30 00:01:19.884648 sshd-session[4735]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:19.889665 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:41332.service: Deactivated successfully. Oct 30 00:01:19.891869 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:01:19.892818 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:01:19.893976 systemd-logind[1592]: Removed session 12. 
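Every ErrImagePull / ImagePullBackOff in this stretch traces back to the same root cause: the ghcr.io/flatcar/calico references at v3.30.4 (apiserver, whisker, whisker-backend, goldmane) resolve to 404 Not Found. One quick way to confirm from a dump like this that all failures share a cause is to tally the failed references; the sketch below does that with the standard library only, and "journal.log" is a placeholder for wherever this capture is saved.

import re
from collections import Counter

# Matches containerd's error lines of the form seen above:
#   level=error msg="PullImage \"<ref>\" failed" ...
PULL_FAIL = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

def failed_pulls(path):
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            counts.update(PULL_FAIL.findall(line))
    return counts

for ref, n in failed_pulls("journal.log").most_common():
    print(f"{n:3d}  {ref}")
# For this stretch of the log the references listed would be
# ghcr.io/flatcar/calico/apiserver:v3.30.4, whisker:v3.30.4,
# whisker-backend:v3.30.4 and goldmane:v3.30.4.
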
Oct 30 00:01:20.618164 kubelet[2786]: E1030 00:01:20.617921 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:20.618671 containerd[1621]: time="2025-10-30T00:01:20.618514529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6vcm7,Uid:0bb0dc31-1db0-483e-b0fa-e4d89369c901,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:01:20.618671 containerd[1621]: time="2025-10-30T00:01:20.618584910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-595np,Uid:36714a96-4960-46bd-99ac-7641ec2c1cb1,Namespace:kube-system,Attempt:0,}" Oct 30 00:01:20.758295 systemd-networkd[1501]: cali4843614acf8: Link UP Oct 30 00:01:20.760023 systemd-networkd[1501]: cali4843614acf8: Gained carrier Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.667 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0 calico-apiserver-565d8bbfcd- calico-apiserver 0bb0dc31-1db0-483e-b0fa-e4d89369c901 841 0 2025-10-30 00:00:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565d8bbfcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-565d8bbfcd-6vcm7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4843614acf8 [] [] }} ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.667 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.704 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" HandleID="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Workload="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.704 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" HandleID="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Workload="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-565d8bbfcd-6vcm7", "timestamp":"2025-10-30 00:01:20.704136052 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.704 [INFO][4781] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.704 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.704 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.713 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.722 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.728 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.730 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.734 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.734 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.736 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.741 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.750 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.751 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" host="localhost" Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.751 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:01:20.780467 containerd[1621]: 2025-10-30 00:01:20.751 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" HandleID="k8s-pod-network.35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Workload="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.781282 containerd[1621]: 2025-10-30 00:01:20.754 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0", GenerateName:"calico-apiserver-565d8bbfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bb0dc31-1db0-483e-b0fa-e4d89369c901", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565d8bbfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-565d8bbfcd-6vcm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4843614acf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:20.781282 containerd[1621]: 2025-10-30 00:01:20.755 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.781282 containerd[1621]: 2025-10-30 00:01:20.755 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4843614acf8 ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.781282 containerd[1621]: 2025-10-30 00:01:20.763 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.781282 containerd[1621]: 2025-10-30 00:01:20.764 [INFO][4752] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0", GenerateName:"calico-apiserver-565d8bbfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bb0dc31-1db0-483e-b0fa-e4d89369c901", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565d8bbfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f", Pod:"calico-apiserver-565d8bbfcd-6vcm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4843614acf8", MAC:"5a:c5:57:ae:fa:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:20.781282 containerd[1621]: 2025-10-30 00:01:20.773 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" Namespace="calico-apiserver" Pod="calico-apiserver-565d8bbfcd-6vcm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565d8bbfcd--6vcm7-eth0" Oct 30 00:01:20.812054 containerd[1621]: time="2025-10-30T00:01:20.811975675Z" level=info msg="connecting to shim 35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f" address="unix:///run/containerd/s/996711d4bd02a1edc5d0aa287d7c14edd1133985a36a5dc91e36eb37a83d3277" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:20.846469 systemd[1]: Started cri-containerd-35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f.scope - libcontainer container 35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f. 
Oct 30 00:01:20.866759 systemd-networkd[1501]: cali6626c22cec3: Link UP Oct 30 00:01:20.867188 systemd-networkd[1501]: cali6626c22cec3: Gained carrier Oct 30 00:01:20.871001 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.667 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--595np-eth0 coredns-668d6bf9bc- kube-system 36714a96-4960-46bd-99ac-7641ec2c1cb1 834 0 2025-10-30 00:00:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-595np eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6626c22cec3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.668 [INFO][4760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.711 [INFO][4783] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" HandleID="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Workload="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.712 [INFO][4783] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" HandleID="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Workload="localhost-k8s-coredns--668d6bf9bc--595np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000128800), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-595np", "timestamp":"2025-10-30 00:01:20.711799444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.712 [INFO][4783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.751 [INFO][4783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.751 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.814 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.825 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.831 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.834 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.838 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.838 [INFO][4783] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.840 [INFO][4783] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72 Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.845 [INFO][4783] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.854 [INFO][4783] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.854 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" host="localhost" Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.854 [INFO][4783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:01:20.886194 containerd[1621]: 2025-10-30 00:01:20.854 [INFO][4783] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" HandleID="k8s-pod-network.2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Workload="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.887399 containerd[1621]: 2025-10-30 00:01:20.862 [INFO][4760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--595np-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36714a96-4960-46bd-99ac-7641ec2c1cb1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-595np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6626c22cec3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:20.887399 containerd[1621]: 2025-10-30 00:01:20.863 [INFO][4760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.887399 containerd[1621]: 2025-10-30 00:01:20.863 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6626c22cec3 ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.887399 containerd[1621]: 2025-10-30 00:01:20.867 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.887399 
containerd[1621]: 2025-10-30 00:01:20.867 [INFO][4760] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--595np-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36714a96-4960-46bd-99ac-7641ec2c1cb1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72", Pod:"coredns-668d6bf9bc-595np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6626c22cec3", MAC:"c2:b1:2a:d0:2b:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:20.887399 containerd[1621]: 2025-10-30 00:01:20.880 [INFO][4760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" Namespace="kube-system" Pod="coredns-668d6bf9bc-595np" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--595np-eth0" Oct 30 00:01:20.928334 containerd[1621]: time="2025-10-30T00:01:20.928254416Z" level=info msg="connecting to shim 2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72" address="unix:///run/containerd/s/af2a2abc753235ed9b0db87394447ac550050ca322990efc58b90a000ce3863f" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:20.967519 systemd[1]: Started cri-containerd-2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72.scope - libcontainer container 2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72. 
Oct 30 00:01:20.973404 containerd[1621]: time="2025-10-30T00:01:20.973305062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565d8bbfcd-6vcm7,Uid:0bb0dc31-1db0-483e-b0fa-e4d89369c901,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"35e7b7ec9367d398df9b6d1cf572ca084d1239bafb4fd0df3c4c05ef60c0ae5f\"" Oct 30 00:01:20.976151 containerd[1621]: time="2025-10-30T00:01:20.975263672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:01:20.987348 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:21.023499 containerd[1621]: time="2025-10-30T00:01:21.023446434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-595np,Uid:36714a96-4960-46bd-99ac-7641ec2c1cb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72\"" Oct 30 00:01:21.024480 kubelet[2786]: E1030 00:01:21.024439 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:21.029380 containerd[1621]: time="2025-10-30T00:01:21.029315836Z" level=info msg="CreateContainer within sandbox \"2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:01:21.041128 containerd[1621]: time="2025-10-30T00:01:21.040187352Z" level=info msg="Container 73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:01:21.050510 containerd[1621]: time="2025-10-30T00:01:21.050448350Z" level=info msg="CreateContainer within sandbox \"2336380402f60bd673e08d1f719d17189e77343d687e5369bee90095e3512f72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b\"" Oct 30 00:01:21.051145 containerd[1621]: time="2025-10-30T00:01:21.051109423Z" level=info msg="StartContainer for \"73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b\"" Oct 30 00:01:21.052238 containerd[1621]: time="2025-10-30T00:01:21.052204314Z" level=info msg="connecting to shim 73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b" address="unix:///run/containerd/s/af2a2abc753235ed9b0db87394447ac550050ca322990efc58b90a000ce3863f" protocol=ttrpc version=3 Oct 30 00:01:21.083421 systemd[1]: Started cri-containerd-73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b.scope - libcontainer container 73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b. 
Oct 30 00:01:21.123734 containerd[1621]: time="2025-10-30T00:01:21.123277116Z" level=info msg="StartContainer for \"73a13913966b2d631d17013dbc26b47ef1d348114e40a20c326fbfbc29905a3b\" returns successfully" Oct 30 00:01:21.397762 containerd[1621]: time="2025-10-30T00:01:21.397603078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:21.399669 containerd[1621]: time="2025-10-30T00:01:21.399575654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:01:21.399669 containerd[1621]: time="2025-10-30T00:01:21.399659169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:21.399969 kubelet[2786]: E1030 00:01:21.399905 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:21.399969 kubelet[2786]: E1030 00:01:21.399975 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:21.400413 kubelet[2786]: E1030 00:01:21.400163 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhffl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:21.401442 kubelet[2786]: E1030 00:01:21.401393 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:01:21.620730 containerd[1621]: time="2025-10-30T00:01:21.620671345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjkt6,Uid:68445617-ec60-49e4-ab10-bde455e7ecc9,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:21.737620 systemd-networkd[1501]: califb9ef0e87b8: Link UP Oct 30 00:01:21.738640 systemd-networkd[1501]: califb9ef0e87b8: Gained carrier Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.659 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kjkt6-eth0 csi-node-driver- calico-system 68445617-ec60-49e4-ab10-bde455e7ecc9 707 0 2025-10-30 00:00:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kjkt6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califb9ef0e87b8 [] [] }} ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.659 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.691 [INFO][4960] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" HandleID="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Workload="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.691 [INFO][4960] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" HandleID="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Workload="localhost-k8s-csi--node--driver--kjkt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kjkt6", "timestamp":"2025-10-30 00:01:21.691169455 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.691 [INFO][4960] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.691 [INFO][4960] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.691 [INFO][4960] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.699 [INFO][4960] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.704 [INFO][4960] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.711 [INFO][4960] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.713 [INFO][4960] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.716 [INFO][4960] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.716 [INFO][4960] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.718 [INFO][4960] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.723 [INFO][4960] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.730 [INFO][4960] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.730 [INFO][4960] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" host="localhost" Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.730 [INFO][4960] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:01:21.757351 containerd[1621]: 2025-10-30 00:01:21.730 [INFO][4960] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" HandleID="k8s-pod-network.3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Workload="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.758027 containerd[1621]: 2025-10-30 00:01:21.734 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kjkt6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68445617-ec60-49e4-ab10-bde455e7ecc9", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kjkt6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb9ef0e87b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:21.758027 containerd[1621]: 2025-10-30 00:01:21.734 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.758027 containerd[1621]: 2025-10-30 00:01:21.734 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb9ef0e87b8 ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.758027 containerd[1621]: 2025-10-30 00:01:21.739 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.758027 containerd[1621]: 2025-10-30 
00:01:21.739 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kjkt6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68445617-ec60-49e4-ab10-bde455e7ecc9", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e", Pod:"csi-node-driver-kjkt6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb9ef0e87b8", MAC:"5a:cd:d3:7d:18:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:21.758027 containerd[1621]: 2025-10-30 00:01:21.753 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" Namespace="calico-system" Pod="csi-node-driver-kjkt6" WorkloadEndpoint="localhost-k8s-csi--node--driver--kjkt6-eth0" Oct 30 00:01:21.782304 containerd[1621]: time="2025-10-30T00:01:21.782238204Z" level=info msg="connecting to shim 3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e" address="unix:///run/containerd/s/c53cee6cb536f9afd15e45273aacfe416e13428e4024233e937523e9c58baab1" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:21.800575 kubelet[2786]: E1030 00:01:21.800522 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:01:21.805291 kubelet[2786]: E1030 00:01:21.805240 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:21.818658 systemd-networkd[1501]: cali4843614acf8: Gained IPv6LL Oct 30 00:01:21.829670 systemd[1]: Started 
cri-containerd-3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e.scope - libcontainer container 3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e. Oct 30 00:01:21.841519 kubelet[2786]: I1030 00:01:21.841433 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-595np" podStartSLOduration=57.841415761 podStartE2EDuration="57.841415761s" podCreationTimestamp="2025-10-30 00:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:01:21.84003702 +0000 UTC m=+64.311500563" watchObservedRunningTime="2025-10-30 00:01:21.841415761 +0000 UTC m=+64.312879304" Oct 30 00:01:21.852550 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:21.870720 containerd[1621]: time="2025-10-30T00:01:21.870641502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kjkt6,Uid:68445617-ec60-49e4-ab10-bde455e7ecc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ac278f57ea71925b624811fbfcf00a26e9ce43437947b63dac9ed2d35f2425e\"" Oct 30 00:01:21.873262 containerd[1621]: time="2025-10-30T00:01:21.873191687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:01:22.135343 systemd-networkd[1501]: cali6626c22cec3: Gained IPv6LL Oct 30 00:01:22.262274 containerd[1621]: time="2025-10-30T00:01:22.262202150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:22.263582 containerd[1621]: time="2025-10-30T00:01:22.263515289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:01:22.263638 containerd[1621]: time="2025-10-30T00:01:22.263530457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:01:22.263868 kubelet[2786]: E1030 00:01:22.263803 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:01:22.263868 kubelet[2786]: E1030 00:01:22.263866 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:01:22.264072 kubelet[2786]: E1030 00:01:22.264029 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78f9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:22.266042 containerd[1621]: time="2025-10-30T00:01:22.266002246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:01:22.665517 containerd[1621]: time="2025-10-30T00:01:22.665435349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:22.666699 containerd[1621]: time="2025-10-30T00:01:22.666655423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:01:22.666822 containerd[1621]: time="2025-10-30T00:01:22.666751272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:01:22.667005 kubelet[2786]: E1030 00:01:22.666937 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:01:22.667005 kubelet[2786]: E1030 00:01:22.667001 2786 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:01:22.667228 kubelet[2786]: E1030 00:01:22.667175 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78f9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:22.668433 kubelet[2786]: E1030 00:01:22.668385 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:22.775323 systemd-networkd[1501]: califb9ef0e87b8: Gained IPv6LL Oct 30 00:01:22.808023 kubelet[2786]: E1030 00:01:22.807982 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:22.808628 kubelet[2786]: E1030 00:01:22.808583 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:01:22.808955 kubelet[2786]: E1030 00:01:22.808917 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:23.812080 kubelet[2786]: E1030 00:01:23.812014 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:24.618547 containerd[1621]: time="2025-10-30T00:01:24.618395997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c88c4ddc-bshzv,Uid:1f252106-e865-42bc-bcfa-ce876455a870,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:24.619998 containerd[1621]: time="2025-10-30T00:01:24.619310534Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:01:24.735475 systemd-networkd[1501]: cali1c0df5b90aa: Link UP Oct 30 00:01:24.735745 systemd-networkd[1501]: cali1c0df5b90aa: Gained carrier Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.661 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0 calico-kube-controllers-75c88c4ddc- calico-system 1f252106-e865-42bc-bcfa-ce876455a870 838 0 2025-10-30 00:00:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75c88c4ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75c88c4ddc-bshzv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c0df5b90aa [] [] }} ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.661 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.690 [INFO][5050] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" HandleID="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Workload="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.690 [INFO][5050] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" HandleID="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Workload="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75c88c4ddc-bshzv", "timestamp":"2025-10-30 00:01:24.69075993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.691 [INFO][5050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.691 [INFO][5050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.691 [INFO][5050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.700 [INFO][5050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.707 [INFO][5050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.712 [INFO][5050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.714 [INFO][5050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.717 [INFO][5050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.717 [INFO][5050] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.719 [INFO][5050] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.723 [INFO][5050] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.730 [INFO][5050] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.730 [INFO][5050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" host="localhost" Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.730 [INFO][5050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:01:24.786560 containerd[1621]: 2025-10-30 00:01:24.730 [INFO][5050] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" HandleID="k8s-pod-network.0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Workload="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.787391 containerd[1621]: 2025-10-30 00:01:24.733 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0", GenerateName:"calico-kube-controllers-75c88c4ddc-", Namespace:"calico-system", SelfLink:"", UID:"1f252106-e865-42bc-bcfa-ce876455a870", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c88c4ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75c88c4ddc-bshzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c0df5b90aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:24.787391 containerd[1621]: 2025-10-30 00:01:24.733 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.787391 containerd[1621]: 2025-10-30 00:01:24.733 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c0df5b90aa ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.787391 containerd[1621]: 2025-10-30 00:01:24.735 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.787391 containerd[1621]: 2025-10-30 00:01:24.736 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0", GenerateName:"calico-kube-controllers-75c88c4ddc-", Namespace:"calico-system", SelfLink:"", UID:"1f252106-e865-42bc-bcfa-ce876455a870", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c88c4ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b", Pod:"calico-kube-controllers-75c88c4ddc-bshzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c0df5b90aa", MAC:"6e:80:58:da:b4:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:01:24.787391 containerd[1621]: 2025-10-30 00:01:24.783 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" Namespace="calico-system" Pod="calico-kube-controllers-75c88c4ddc-bshzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75c88c4ddc--bshzv-eth0" Oct 30 00:01:24.816349 containerd[1621]: time="2025-10-30T00:01:24.816284747Z" level=info msg="connecting to shim 0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b" address="unix:///run/containerd/s/12ed9bd4aeda6d4504947ecc8ed18cd85f6d43d95ff7bd84f5306c1010825429" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:24.847324 systemd[1]: Started cri-containerd-0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b.scope - libcontainer container 0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b. Oct 30 00:01:24.864327 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:01:24.898990 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:58548.service - OpenSSH per-connection server daemon (10.0.0.1:58548). 
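The two k8s.go records above dump the same projectcalico.org/v3 WorkloadEndpoint twice: first populated from the pod, then again once the MAC and container ID are known, just before it is written to the datastore. The one-line dump is hard to read, so here is a trimmed-down local struct (an illustration only, not the real v3 API type) holding just the fields that appear in the log, rendered as JSON.

package main

import (
	"encoding/json"
	"fmt"
)

// workloadEndpoint is a simplified stand-in for the v3 WorkloadEndpoint in
// the log above; only the fields visible there are kept.
type workloadEndpoint struct {
	Node          string   `json:"node"`
	Pod           string   `json:"pod"`
	ContainerID   string   `json:"containerID"`
	Endpoint      string   `json:"endpoint"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac"`
	IPNetworks    []string `json:"ipNetworks"`
	Profiles      []string `json:"profiles"`
}

func main() {
	ep := workloadEndpoint{
		Node:          "localhost",
		Pod:           "calico-kube-controllers-75c88c4ddc-bshzv",
		ContainerID:   "0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b",
		Endpoint:      "eth0",
		InterfaceName: "cali1c0df5b90aa",
		MAC:           "6e:80:58:da:b4:dd",
		IPNetworks:    []string{"192.168.88.136/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"},
	}
	out, _ := json.MarshalIndent(ep, "", "  ")
	fmt.Println(string(out))
}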
Oct 30 00:01:24.909091 containerd[1621]: time="2025-10-30T00:01:24.908998166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c88c4ddc-bshzv,Uid:1f252106-e865-42bc-bcfa-ce876455a870,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f56d432f335822f83a6b1e0f12908feffd65e8b778da65176d9d963218d7c0b\"" Oct 30 00:01:24.996761 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 58548 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:24.999770 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:25.006869 systemd-logind[1592]: New session 13 of user core. Oct 30 00:01:25.016358 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:01:25.021931 containerd[1621]: time="2025-10-30T00:01:25.021878867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:25.024470 containerd[1621]: time="2025-10-30T00:01:25.024398206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:01:25.024532 containerd[1621]: time="2025-10-30T00:01:25.024506398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:25.025331 kubelet[2786]: E1030 00:01:25.025272 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:25.025677 kubelet[2786]: E1030 00:01:25.025338 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:25.025722 kubelet[2786]: E1030 00:01:25.025643 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff2lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6h8nd_calico-apiserver(5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:25.026125 containerd[1621]: time="2025-10-30T00:01:25.026067700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:01:25.027079 kubelet[2786]: E1030 00:01:25.027036 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:01:25.178411 sshd[5115]: Connection closed by 10.0.0.1 port 58548 Oct 30 00:01:25.178723 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:25.184724 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:58548.service: Deactivated successfully. Oct 30 00:01:25.187131 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:01:25.188040 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:01:25.189784 systemd-logind[1592]: Removed session 13. 
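The repeated "fetch failed after status: 404 Not Found" / ErrImagePull records come from containerd's resolver failing to find the tag's manifest on ghcr.io. A rough sketch of the same kind of probe is below; it only shows which distribution endpoint is involved. Note that ghcr.io normally expects a (possibly anonymous) bearer token, so an unauthenticated request may answer 401 rather than the 404 containerd saw; the registry, repository, and tag are copied from the log, everything else is illustrative.

package main

import (
	"fmt"
	"net/http"
)

// checkTag issues a HEAD request against the OCI distribution manifest
// endpoint for a tag, roughly what a resolver does before pulling.
func checkTag(registry, repo, tag string) (int, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	code, err := checkTag("ghcr.io", "flatcar/calico/apiserver", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("manifest status:", code)
}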
Oct 30 00:01:25.692058 containerd[1621]: time="2025-10-30T00:01:25.691976297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:25.693406 containerd[1621]: time="2025-10-30T00:01:25.693358766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:01:25.693520 containerd[1621]: time="2025-10-30T00:01:25.693460576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:01:25.693696 kubelet[2786]: E1030 00:01:25.693655 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:01:25.693775 kubelet[2786]: E1030 00:01:25.693706 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:01:25.694043 kubelet[2786]: E1030 00:01:25.693952 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwt78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:25.694204 containerd[1621]: time="2025-10-30T00:01:25.694076616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:01:25.695233 kubelet[2786]: E1030 00:01:25.695171 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:01:25.820443 kubelet[2786]: E1030 00:01:25.820385 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:01:26.042814 containerd[1621]: time="2025-10-30T00:01:26.042740823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:26.044002 containerd[1621]: time="2025-10-30T00:01:26.043952834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:01:26.044059 containerd[1621]: time="2025-10-30T00:01:26.043995905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:26.044284 kubelet[2786]: E1030 00:01:26.044230 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:01:26.044638 kubelet[2786]: E1030 00:01:26.044289 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:01:26.044638 kubelet[2786]: E1030 00:01:26.044424 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2gnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qgxmh_calico-system(68bb771e-4dde-43d3-80f7-8e8958576aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:26.045948 kubelet[2786]: E1030 00:01:26.045898 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:01:26.615402 systemd-networkd[1501]: cali1c0df5b90aa: Gained IPv6LL Oct 30 00:01:26.618943 containerd[1621]: time="2025-10-30T00:01:26.618895943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:01:26.822736 kubelet[2786]: E1030 00:01:26.822658 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:01:26.997627 containerd[1621]: time="2025-10-30T00:01:26.997534672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:27.102690 containerd[1621]: time="2025-10-30T00:01:27.102589705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:01:27.102690 containerd[1621]: time="2025-10-30T00:01:27.102666528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:01:27.102929 kubelet[2786]: E1030 00:01:27.102883 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:01:27.103318 kubelet[2786]: E1030 00:01:27.102942 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:01:27.103318 kubelet[2786]: E1030 00:01:27.103060 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:409e14720a524b50ad4f5846391334f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:27.105343 containerd[1621]: time="2025-10-30T00:01:27.105311564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:01:27.664703 containerd[1621]: time="2025-10-30T00:01:27.664620147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:27.666288 containerd[1621]: time="2025-10-30T00:01:27.666245891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:01:27.666346 containerd[1621]: time="2025-10-30T00:01:27.666276970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:01:27.666596 kubelet[2786]: E1030 00:01:27.666521 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:01:27.666596 kubelet[2786]: E1030 00:01:27.666594 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:01:27.666816 kubelet[2786]: E1030 00:01:27.666734 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:27.668257 kubelet[2786]: E1030 00:01:27.668214 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc" Oct 30 00:01:30.202903 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:33058.service - OpenSSH per-connection server daemon (10.0.0.1:33058). 
Oct 30 00:01:30.262857 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 33058 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:30.264603 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:30.269573 systemd-logind[1592]: New session 14 of user core. Oct 30 00:01:30.279229 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:01:30.419700 sshd[5136]: Connection closed by 10.0.0.1 port 33058 Oct 30 00:01:30.420087 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:30.432203 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:33058.service: Deactivated successfully. Oct 30 00:01:30.434501 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:01:30.435593 systemd-logind[1592]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:01:30.439298 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:33060.service - OpenSSH per-connection server daemon (10.0.0.1:33060). Oct 30 00:01:30.440062 systemd-logind[1592]: Removed session 14. Oct 30 00:01:30.493673 sshd[5150]: Accepted publickey for core from 10.0.0.1 port 33060 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:30.495415 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:30.500695 systemd-logind[1592]: New session 15 of user core. Oct 30 00:01:30.510291 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:01:30.720146 sshd[5153]: Connection closed by 10.0.0.1 port 33060 Oct 30 00:01:30.720493 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:30.736955 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:33060.service: Deactivated successfully. Oct 30 00:01:30.739084 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:01:30.740055 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:01:30.743124 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:33062.service - OpenSSH per-connection server daemon (10.0.0.1:33062). Oct 30 00:01:30.744006 systemd-logind[1592]: Removed session 15. Oct 30 00:01:30.803719 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 33062 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:30.805302 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:30.809843 systemd-logind[1592]: New session 16 of user core. Oct 30 00:01:30.823268 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:01:30.976172 sshd[5167]: Connection closed by 10.0.0.1 port 33062 Oct 30 00:01:30.976547 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:30.981865 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:33062.service: Deactivated successfully. Oct 30 00:01:30.984090 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:01:30.985120 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:01:30.986437 systemd-logind[1592]: Removed session 16. 
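Sessions 14 through 16 above follow the same fixed pattern: Accepted publickey, session opened, New session N, Connection closed, Removed session N. Purely as an illustration of working with lines like these, here is a small pairing of open/close records; the regular expressions are derived from the format visible in this log, not from any systemd specification, and the sample lines are abbreviated.

package main

import (
	"fmt"
	"regexp"
)

// Patterns inferred from the systemd-logind records in this log.
var (
	opened = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	closed = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	lines := []string{
		"systemd-logind[1592]: New session 14 of user core.",
		"systemd-logind[1592]: Removed session 14.",
		"systemd-logind[1592]: New session 15 of user core.",
		"systemd-logind[1592]: Removed session 15.",
	}
	open := map[string]string{}
	for _, l := range lines {
		if m := opened.FindStringSubmatch(l); m != nil {
			open[m[1]] = m[2]
		} else if m := closed.FindStringSubmatch(l); m != nil {
			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
			delete(open, m[1])
		}
	}
}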
Oct 30 00:01:34.618158 containerd[1621]: time="2025-10-30T00:01:34.618085994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:01:35.224908 containerd[1621]: time="2025-10-30T00:01:35.224772618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:35.226443 containerd[1621]: time="2025-10-30T00:01:35.226370673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:01:35.226512 containerd[1621]: time="2025-10-30T00:01:35.226479055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:35.226987 kubelet[2786]: E1030 00:01:35.226666 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:35.226987 kubelet[2786]: E1030 00:01:35.226737 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:35.226987 kubelet[2786]: E1030 00:01:35.226907 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhffl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:35.228709 kubelet[2786]: E1030 00:01:35.228617 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:01:36.005213 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:33094.service - OpenSSH per-connection server daemon (10.0.0.1:33094). Oct 30 00:01:36.080378 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 33094 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:36.082706 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:36.091878 systemd-logind[1592]: New session 17 of user core. Oct 30 00:01:36.100524 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 00:01:36.261066 sshd[5191]: Connection closed by 10.0.0.1 port 33094 Oct 30 00:01:36.261198 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:36.267892 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:33094.service: Deactivated successfully. Oct 30 00:01:36.271224 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 00:01:36.272588 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit. Oct 30 00:01:36.274959 systemd-logind[1592]: Removed session 17. 
Oct 30 00:01:36.617559 kubelet[2786]: E1030 00:01:36.617394 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:37.618855 containerd[1621]: time="2025-10-30T00:01:37.618427317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:01:38.096651 containerd[1621]: time="2025-10-30T00:01:38.096587771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:38.337696 containerd[1621]: time="2025-10-30T00:01:38.337611966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:01:38.337900 containerd[1621]: time="2025-10-30T00:01:38.337663503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:01:38.338036 kubelet[2786]: E1030 00:01:38.337986 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:01:38.338531 kubelet[2786]: E1030 00:01:38.338050 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:01:38.338531 kubelet[2786]: E1030 00:01:38.338367 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78f9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:38.339044 containerd[1621]: time="2025-10-30T00:01:38.339014638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:01:38.619539 kubelet[2786]: E1030 00:01:38.619474 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc" Oct 30 00:01:38.743374 containerd[1621]: time="2025-10-30T00:01:38.743303668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:38.884561 containerd[1621]: time="2025-10-30T00:01:38.884366612Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:01:38.884561 containerd[1621]: time="2025-10-30T00:01:38.884427044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:01:38.884788 kubelet[2786]: E1030 00:01:38.884723 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:01:38.884873 kubelet[2786]: E1030 00:01:38.884794 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:01:38.885350 containerd[1621]: time="2025-10-30T00:01:38.885276933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:01:38.885596 kubelet[2786]: E1030 00:01:38.885319 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwt78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:38.886545 kubelet[2786]: E1030 00:01:38.886507 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:01:39.491466 containerd[1621]: time="2025-10-30T00:01:39.491195175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:39.619772 kubelet[2786]: E1030 00:01:39.619520 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:01:39.706316 containerd[1621]: time="2025-10-30T00:01:39.706220667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:01:39.706316 containerd[1621]: time="2025-10-30T00:01:39.706267755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:01:39.706551 kubelet[2786]: E1030 00:01:39.706507 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:01:39.706619 kubelet[2786]: E1030 00:01:39.706563 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:01:39.706753 kubelet[2786]: E1030 00:01:39.706705 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78f9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:39.708044 kubelet[2786]: E1030 00:01:39.707951 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:40.618361 kubelet[2786]: E1030 00:01:40.618310 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:01:41.277356 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:34200.service - OpenSSH per-connection server daemon (10.0.0.1:34200). Oct 30 00:01:41.332799 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 34200 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:41.334704 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:41.340200 systemd-logind[1592]: New session 18 of user core. Oct 30 00:01:41.351247 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 00:01:41.864938 containerd[1621]: time="2025-10-30T00:01:41.864877109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\" id:\"a1e66ef0ee36b35308b06277f61ddca953ae4bb1da28bd31650347a0aad6a0bf\" pid:5232 exited_at:{seconds:1761782501 nanos:864474375}" Oct 30 00:01:41.867925 kubelet[2786]: E1030 00:01:41.867871 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:42.745124 sshd[5208]: Connection closed by 10.0.0.1 port 34200 Oct 30 00:01:42.745456 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:42.752008 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit. Oct 30 00:01:42.753272 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:34200.service: Deactivated successfully. Oct 30 00:01:42.756194 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 00:01:42.757699 systemd-logind[1592]: Removed session 18. 
Oct 30 00:01:44.617247 kubelet[2786]: E1030 00:01:44.617200 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:45.617653 kubelet[2786]: E1030 00:01:45.617554 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:47.619764 kubelet[2786]: E1030 00:01:47.619664 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:01:47.771374 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:34218.service - OpenSSH per-connection server daemon (10.0.0.1:34218). Oct 30 00:01:47.866026 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 34218 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:47.868445 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:47.874922 systemd-logind[1592]: New session 19 of user core. Oct 30 00:01:47.881381 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 00:01:48.020195 sshd[5258]: Connection closed by 10.0.0.1 port 34218 Oct 30 00:01:48.020562 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:48.025233 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:34218.service: Deactivated successfully. Oct 30 00:01:48.027573 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 00:01:48.028466 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit. Oct 30 00:01:48.029755 systemd-logind[1592]: Removed session 19. 
Oct 30 00:01:49.620355 containerd[1621]: time="2025-10-30T00:01:49.620283436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:01:50.175019 containerd[1621]: time="2025-10-30T00:01:50.174935431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:50.333967 containerd[1621]: time="2025-10-30T00:01:50.333862430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:01:50.333967 containerd[1621]: time="2025-10-30T00:01:50.333921505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:01:50.334254 kubelet[2786]: E1030 00:01:50.334186 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:01:50.334254 kubelet[2786]: E1030 00:01:50.334245 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:01:50.334716 kubelet[2786]: E1030 00:01:50.334376 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:409e14720a524b50ad4f5846391334f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:50.336479 containerd[1621]: time="2025-10-30T00:01:50.336436418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:01:50.619185 kubelet[2786]: E1030 00:01:50.619117 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:01:50.743132 containerd[1621]: time="2025-10-30T00:01:50.743009198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:50.790490 containerd[1621]: time="2025-10-30T00:01:50.790410365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:01:50.790729 containerd[1621]: time="2025-10-30T00:01:50.790436225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:01:50.790788 kubelet[2786]: E1030 00:01:50.790661 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:01:50.790788 kubelet[2786]: E1030 00:01:50.790713 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:01:50.790997 kubelet[2786]: E1030 00:01:50.790941 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:50.791145 containerd[1621]: time="2025-10-30T00:01:50.791044170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:01:50.792426 kubelet[2786]: E1030 00:01:50.792382 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc" Oct 30 00:01:51.619013 kubelet[2786]: E1030 00:01:51.618230 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:01:51.834335 containerd[1621]: time="2025-10-30T00:01:51.834240578Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Oct 30 00:01:51.838297 containerd[1621]: time="2025-10-30T00:01:51.838043298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:51.838297 containerd[1621]: time="2025-10-30T00:01:51.838201734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:01:51.840684 kubelet[2786]: E1030 00:01:51.838625 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:01:51.840684 kubelet[2786]: E1030 00:01:51.838736 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:01:51.840684 kubelet[2786]: E1030 00:01:51.839300 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2gnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qgxmh_calico-system(68bb771e-4dde-43d3-80f7-8e8958576aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:51.840953 containerd[1621]: time="2025-10-30T00:01:51.839252936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:01:51.841662 kubelet[2786]: E1030 00:01:51.841585 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:01:52.354306 containerd[1621]: time="2025-10-30T00:01:52.354246280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:01:52.392809 containerd[1621]: time="2025-10-30T00:01:52.392735914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:01:52.394125 kubelet[2786]: E1030 00:01:52.393061 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:52.394125 kubelet[2786]: E1030 00:01:52.393142 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:01:52.394125 kubelet[2786]: E1030 00:01:52.393295 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff2lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6h8nd_calico-apiserver(5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:01:52.394478 kubelet[2786]: E1030 00:01:52.394445 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:01:52.401455 containerd[1621]: time="2025-10-30T00:01:52.392776552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:01:53.048970 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248). Oct 30 00:01:53.111024 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:53.112952 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:53.118808 systemd-logind[1592]: New session 20 of user core. Oct 30 00:01:53.127318 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 30 00:01:53.305663 sshd[5282]: Connection closed by 10.0.0.1 port 37248 Oct 30 00:01:53.306073 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:53.310462 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:37248.service: Deactivated successfully. Oct 30 00:01:53.313082 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 00:01:53.315054 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit. Oct 30 00:01:53.316641 systemd-logind[1592]: Removed session 20. Oct 30 00:01:54.619140 kubelet[2786]: E1030 00:01:54.618739 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:01:58.321189 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:37284.service - OpenSSH per-connection server daemon (10.0.0.1:37284). Oct 30 00:01:58.389693 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 37284 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:01:58.391851 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:58.397136 systemd-logind[1592]: New session 21 of user core. Oct 30 00:01:58.407403 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 00:01:58.539299 sshd[5301]: Connection closed by 10.0.0.1 port 37284 Oct 30 00:01:58.539676 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:58.545793 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:37284.service: Deactivated successfully. Oct 30 00:01:58.548529 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 00:01:58.549523 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit. Oct 30 00:01:58.551349 systemd-logind[1592]: Removed session 21. 
Oct 30 00:01:59.618712 containerd[1621]: time="2025-10-30T00:01:59.618639754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:02:00.143620 containerd[1621]: time="2025-10-30T00:02:00.143533048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:00.159767 containerd[1621]: time="2025-10-30T00:02:00.159670140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:02:00.159907 containerd[1621]: time="2025-10-30T00:02:00.159748261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:02:00.160134 kubelet[2786]: E1030 00:02:00.160045 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:00.160494 kubelet[2786]: E1030 00:02:00.160141 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:00.160494 kubelet[2786]: E1030 00:02:00.160299 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhffl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6vcm7_calico-apiserver(0bb0dc31-1db0-483e-b0fa-e4d89369c901): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:00.161551 kubelet[2786]: E1030 00:02:00.161493 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:02:01.618759 containerd[1621]: time="2025-10-30T00:02:01.618698562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:02:02.219391 containerd[1621]: time="2025-10-30T00:02:02.219297699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:02.246726 containerd[1621]: time="2025-10-30T00:02:02.246620690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:02:02.246914 containerd[1621]: time="2025-10-30T00:02:02.246748616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:02:02.247005 kubelet[2786]: E1030 00:02:02.246946 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:02:02.247490 kubelet[2786]: E1030 00:02:02.247011 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:02:02.247490 kubelet[2786]: E1030 00:02:02.247195 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwt78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-75c88c4ddc-bshzv_calico-system(1f252106-e865-42bc-bcfa-ce876455a870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:02.248520 kubelet[2786]: E1030 00:02:02.248446 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:02:02.618439 kubelet[2786]: E1030 00:02:02.618261 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed" Oct 30 00:02:03.552706 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:50786.service - OpenSSH per-connection server daemon (10.0.0.1:50786). Oct 30 00:02:03.605123 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 50786 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:03.606693 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:03.611417 systemd-logind[1592]: New session 22 of user core. Oct 30 00:02:03.620386 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 00:02:03.754914 sshd[5318]: Connection closed by 10.0.0.1 port 50786 Oct 30 00:02:03.755376 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:03.761416 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:50786.service: Deactivated successfully. Oct 30 00:02:03.764573 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 00:02:03.765674 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit. Oct 30 00:02:03.768024 systemd-logind[1592]: Removed session 22. 
Oct 30 00:02:05.620909 containerd[1621]: time="2025-10-30T00:02:05.620690784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:02:05.623131 kubelet[2786]: E1030 00:02:05.622262 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc" Oct 30 00:02:06.159385 containerd[1621]: time="2025-10-30T00:02:06.159290869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:06.160679 containerd[1621]: time="2025-10-30T00:02:06.160633159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:02:06.160767 containerd[1621]: time="2025-10-30T00:02:06.160676643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:02:06.160984 kubelet[2786]: E1030 00:02:06.160910 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:02:06.161057 kubelet[2786]: E1030 00:02:06.160985 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:02:06.161243 kubelet[2786]: E1030 00:02:06.161140 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78f9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:06.163339 containerd[1621]: time="2025-10-30T00:02:06.163304995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:02:06.511664 containerd[1621]: time="2025-10-30T00:02:06.511580547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:06.581894 containerd[1621]: time="2025-10-30T00:02:06.581794633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:02:06.582116 containerd[1621]: time="2025-10-30T00:02:06.581873824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:02:06.582258 kubelet[2786]: E1030 00:02:06.582199 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:02:06.582330 kubelet[2786]: E1030 00:02:06.582260 2786 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:02:06.582478 kubelet[2786]: E1030 00:02:06.582424 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78f9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kjkt6_calico-system(68445617-ec60-49e4-ab10-bde455e7ecc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:06.583694 kubelet[2786]: E1030 00:02:06.583635 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9" Oct 30 00:02:06.618263 kubelet[2786]: E1030 00:02:06.618196 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa" Oct 30 00:02:08.776916 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:50858.service - OpenSSH per-connection server daemon (10.0.0.1:50858). Oct 30 00:02:08.852672 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 50858 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:08.854638 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:08.860055 systemd-logind[1592]: New session 23 of user core. Oct 30 00:02:08.875386 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 00:02:09.006375 sshd[5336]: Connection closed by 10.0.0.1 port 50858 Oct 30 00:02:09.006751 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:09.017124 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:50858.service: Deactivated successfully. Oct 30 00:02:09.019167 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 00:02:09.020175 systemd-logind[1592]: Session 23 logged out. Waiting for processes to exit. Oct 30 00:02:09.022829 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:50862.service - OpenSSH per-connection server daemon (10.0.0.1:50862). Oct 30 00:02:09.023867 systemd-logind[1592]: Removed session 23. Oct 30 00:02:09.082404 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 50862 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:09.084815 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:09.091840 systemd-logind[1592]: New session 24 of user core. Oct 30 00:02:09.102276 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 30 00:02:09.642468 sshd[5352]: Connection closed by 10.0.0.1 port 50862 Oct 30 00:02:09.643196 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:09.657825 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:50862.service: Deactivated successfully. Oct 30 00:02:09.660177 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 00:02:09.661170 systemd-logind[1592]: Session 24 logged out. Waiting for processes to exit. Oct 30 00:02:09.664356 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:50876.service - OpenSSH per-connection server daemon (10.0.0.1:50876). Oct 30 00:02:09.665265 systemd-logind[1592]: Removed session 24. Oct 30 00:02:09.731262 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 50876 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:09.733139 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:09.738915 systemd-logind[1592]: New session 25 of user core. Oct 30 00:02:09.746283 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 30 00:02:10.298238 sshd[5366]: Connection closed by 10.0.0.1 port 50876 Oct 30 00:02:10.299359 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:10.313929 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:50876.service: Deactivated successfully. Oct 30 00:02:10.317750 systemd[1]: session-25.scope: Deactivated successfully. Oct 30 00:02:10.319316 systemd-logind[1592]: Session 25 logged out. Waiting for processes to exit. Oct 30 00:02:10.325329 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:60542.service - OpenSSH per-connection server daemon (10.0.0.1:60542). Oct 30 00:02:10.327406 systemd-logind[1592]: Removed session 25. Oct 30 00:02:10.379630 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 60542 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:10.381167 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:10.386383 systemd-logind[1592]: New session 26 of user core. Oct 30 00:02:10.401266 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 30 00:02:11.410970 sshd[5393]: Connection closed by 10.0.0.1 port 60542 Oct 30 00:02:11.411399 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:11.422271 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:60542.service: Deactivated successfully. Oct 30 00:02:11.424362 systemd[1]: session-26.scope: Deactivated successfully. Oct 30 00:02:11.425195 systemd-logind[1592]: Session 26 logged out. Waiting for processes to exit. Oct 30 00:02:11.428010 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:60550.service - OpenSSH per-connection server daemon (10.0.0.1:60550). Oct 30 00:02:11.429467 systemd-logind[1592]: Removed session 26. Oct 30 00:02:11.491672 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 60550 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:11.493660 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:11.498960 systemd-logind[1592]: New session 27 of user core. Oct 30 00:02:11.505363 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 30 00:02:11.624025 sshd[5408]: Connection closed by 10.0.0.1 port 60550 Oct 30 00:02:11.624391 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:11.630060 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:60550.service: Deactivated successfully. Oct 30 00:02:11.632586 systemd[1]: session-27.scope: Deactivated successfully. Oct 30 00:02:11.633789 systemd-logind[1592]: Session 27 logged out. Waiting for processes to exit. Oct 30 00:02:11.635623 systemd-logind[1592]: Removed session 27. 
Oct 30 00:02:11.893324 containerd[1621]: time="2025-10-30T00:02:11.893234863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a7464cc94642f7dbce593d4fdcc4c7ea6ef8132ce35abf83236b5f05337efa0\" id:\"7bc4f9e6e44e51eacf1c51abb695772a9931b519fb5737e188ca9f475baf8ee7\" pid:5433 exited_at:{seconds:1761782531 nanos:892682433}" Oct 30 00:02:13.618988 kubelet[2786]: E1030 00:02:13.618910 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901" Oct 30 00:02:15.618842 kubelet[2786]: E1030 00:02:15.618472 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870" Oct 30 00:02:16.638694 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:60564.service - OpenSSH per-connection server daemon (10.0.0.1:60564). Oct 30 00:02:16.679144 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 60564 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:16.680710 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:16.685562 systemd-logind[1592]: New session 28 of user core. Oct 30 00:02:16.694307 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 30 00:02:16.808444 sshd[5449]: Connection closed by 10.0.0.1 port 60564 Oct 30 00:02:16.808743 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:16.812367 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:60564.service: Deactivated successfully. Oct 30 00:02:16.814947 systemd[1]: session-28.scope: Deactivated successfully. Oct 30 00:02:16.816536 systemd-logind[1592]: Session 28 logged out. Waiting for processes to exit. Oct 30 00:02:16.818231 systemd-logind[1592]: Removed session 28. 
Oct 30 00:02:17.619331 kubelet[2786]: E1030 00:02:17.619221 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed"
Oct 30 00:02:18.618443 kubelet[2786]: E1030 00:02:18.618070 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa"
Oct 30 00:02:18.618443 kubelet[2786]: E1030 00:02:18.618398 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9"
Oct 30 00:02:19.621271 kubelet[2786]: E1030 00:02:19.621211 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc"
Oct 30 00:02:20.617480 kubelet[2786]: E1030 00:02:20.617409 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:02:21.826472 systemd[1]: Started sshd@28-10.0.0.55:22-10.0.0.1:44486.service - OpenSSH per-connection server daemon (10.0.0.1:44486).
Oct 30 00:02:21.885570 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 44486 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:02:21.887453 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:02:21.893782 systemd-logind[1592]: New session 29 of user core.
Oct 30 00:02:21.902321 systemd[1]: Started session-29.scope - Session 29 of User core.
Oct 30 00:02:22.047812 sshd[5472]: Connection closed by 10.0.0.1 port 44486
Oct 30 00:02:22.048250 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Oct 30 00:02:22.053486 systemd[1]: sshd@28-10.0.0.55:22-10.0.0.1:44486.service: Deactivated successfully.
Oct 30 00:02:22.056650 systemd[1]: session-29.scope: Deactivated successfully.
Oct 30 00:02:22.058158 systemd-logind[1592]: Session 29 logged out. Waiting for processes to exit.
Oct 30 00:02:22.060974 systemd-logind[1592]: Removed session 29.
Oct 30 00:02:26.619038 kubelet[2786]: E1030 00:02:26.618953 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c88c4ddc-bshzv" podUID="1f252106-e865-42bc-bcfa-ce876455a870"
Oct 30 00:02:27.063083 systemd[1]: Started sshd@29-10.0.0.55:22-10.0.0.1:44494.service - OpenSSH per-connection server daemon (10.0.0.1:44494).
Oct 30 00:02:27.146152 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 44494 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:02:27.148117 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:02:27.156265 systemd-logind[1592]: New session 30 of user core.
Oct 30 00:02:27.161409 systemd[1]: Started session-30.scope - Session 30 of User core.
Oct 30 00:02:27.302815 sshd[5491]: Connection closed by 10.0.0.1 port 44494
Oct 30 00:02:27.303753 sshd-session[5488]: pam_unix(sshd:session): session closed for user core
Oct 30 00:02:27.309998 systemd[1]: sshd@29-10.0.0.55:22-10.0.0.1:44494.service: Deactivated successfully.
Oct 30 00:02:27.314387 systemd[1]: session-30.scope: Deactivated successfully.
Oct 30 00:02:27.315690 systemd-logind[1592]: Session 30 logged out. Waiting for processes to exit.
Oct 30 00:02:27.318122 systemd-logind[1592]: Removed session 30.
Oct 30 00:02:27.618551 kubelet[2786]: E1030 00:02:27.618403 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6vcm7" podUID="0bb0dc31-1db0-483e-b0fa-e4d89369c901"
Oct 30 00:02:31.621133 containerd[1621]: time="2025-10-30T00:02:31.621067776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 30 00:02:31.621727 kubelet[2786]: E1030 00:02:31.621564 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qgxmh" podUID="68bb771e-4dde-43d3-80f7-8e8958576aed"
Oct 30 00:02:31.998040 containerd[1621]: time="2025-10-30T00:02:31.997965841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:31.999458 containerd[1621]: time="2025-10-30T00:02:31.999401512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 30 00:02:31.999530 containerd[1621]: time="2025-10-30T00:02:31.999456477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 30 00:02:31.999776 kubelet[2786]: E1030 00:02:31.999697 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 00:02:31.999776 kubelet[2786]: E1030 00:02:31.999768 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 00:02:32.000001 kubelet[2786]: E1030 00:02:31.999935 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:409e14720a524b50ad4f5846391334f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:32.002551 containerd[1621]: time="2025-10-30T00:02:32.002449220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 30 00:02:32.320197 systemd[1]: Started sshd@30-10.0.0.55:22-10.0.0.1:56792.service - OpenSSH per-connection server daemon (10.0.0.1:56792).
Oct 30 00:02:32.381714 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 56792 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:02:32.383464 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:02:32.388823 systemd-logind[1592]: New session 31 of user core.
Oct 30 00:02:32.390591 containerd[1621]: time="2025-10-30T00:02:32.390551363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:32.391932 containerd[1621]: time="2025-10-30T00:02:32.391893965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 30 00:02:32.392014 containerd[1621]: time="2025-10-30T00:02:32.391964330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 30 00:02:32.392210 kubelet[2786]: E1030 00:02:32.392147 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 30 00:02:32.392210 kubelet[2786]: E1030 00:02:32.392206 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 30 00:02:32.392412 kubelet[2786]: E1030 00:02:32.392369 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6bpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dd57745-5wrzz_calico-system(5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:32.393581 kubelet[2786]: E1030 00:02:32.393525 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dd57745-5wrzz" podUID="5d1c4010-0c3f-4d18-a5fa-aaa1ffb9aacc"
Oct 30 00:02:32.394309 systemd[1]: Started session-31.scope - Session 31 of User core.
Oct 30 00:02:32.532903 sshd[5507]: Connection closed by 10.0.0.1 port 56792
Oct 30 00:02:32.535382 sshd-session[5504]: pam_unix(sshd:session): session closed for user core
Oct 30 00:02:32.541391 systemd[1]: sshd@30-10.0.0.55:22-10.0.0.1:56792.service: Deactivated successfully.
Oct 30 00:02:32.544393 systemd[1]: session-31.scope: Deactivated successfully.
Oct 30 00:02:32.545554 systemd-logind[1592]: Session 31 logged out. Waiting for processes to exit.
Oct 30 00:02:32.547178 systemd-logind[1592]: Removed session 31.
Oct 30 00:02:32.618468 kubelet[2786]: E1030 00:02:32.618331 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:02:32.620365 kubelet[2786]: E1030 00:02:32.620325 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kjkt6" podUID="68445617-ec60-49e4-ab10-bde455e7ecc9"
Oct 30 00:02:33.619999 containerd[1621]: time="2025-10-30T00:02:33.619951723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 30 00:02:34.021612 containerd[1621]: time="2025-10-30T00:02:34.021539222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:34.109086 containerd[1621]: time="2025-10-30T00:02:34.109014710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 00:02:34.109170 containerd[1621]: time="2025-10-30T00:02:34.109081236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:02:34.109361 kubelet[2786]: E1030 00:02:34.109311 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:02:34.109850 kubelet[2786]: E1030 00:02:34.109376 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:02:34.109850 kubelet[2786]: E1030 00:02:34.109530 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff2lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-565d8bbfcd-6h8nd_calico-apiserver(5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:34.110738 kubelet[2786]: E1030 00:02:34.110700 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-565d8bbfcd-6h8nd" podUID="5d1e1dc8-5310-4db6-99a1-ad75bb29c5fa"