Oct 28 13:20:45.147543 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 11:22:35 -00 2025
Oct 28 13:20:45.147564 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b
Oct 28 13:20:45.147575 kernel: BIOS-provided physical RAM map:
Oct 28 13:20:45.147582 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 28 13:20:45.147598 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 28 13:20:45.147606 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 28 13:20:45.147614 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 28 13:20:45.147621 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 28 13:20:45.147628 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 28 13:20:45.147634 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 28 13:20:45.147644 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Oct 28 13:20:45.147650 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 28 13:20:45.147657 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 28 13:20:45.147664 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 28 13:20:45.147672 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 28 13:20:45.147682 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 28 13:20:45.147690 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 28 13:20:45.147698 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 28 13:20:45.147705 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 28 13:20:45.147712 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 28 13:20:45.147719 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 28 13:20:45.147727 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 28 13:20:45.147734 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 28 13:20:45.147741 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 28 13:20:45.147748 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 28 13:20:45.147758 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 28 13:20:45.147765 kernel: NX (Execute Disable) protection: active
Oct 28 13:20:45.147772 kernel: APIC: Static calls initialized
Oct 28 13:20:45.147779 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Oct 28 13:20:45.147787 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Oct 28 13:20:45.147794 kernel: extended physical RAM map:
Oct 28 13:20:45.147802 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 28 13:20:45.147809 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 28 13:20:45.147816 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 28 13:20:45.147824 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 28 13:20:45.147831 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 28 13:20:45.147841 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 28 13:20:45.147848 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 28 13:20:45.147855 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Oct 28 13:20:45.147874 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Oct 28 13:20:45.147886 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Oct 28 13:20:45.147896 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Oct 28 13:20:45.147903 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Oct 28 13:20:45.147911 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 28 13:20:45.147919 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 28 13:20:45.147926 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 28 13:20:45.147934 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 28 13:20:45.147942 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 28 13:20:45.147950 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 28 13:20:45.147959 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 28 13:20:45.147967 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 28 13:20:45.147974 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 28 13:20:45.147982 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 28 13:20:45.147990 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 28 13:20:45.147998 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 28 13:20:45.148005 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 28 13:20:45.148013 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 28 13:20:45.148020 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 28 13:20:45.148028 kernel: efi: EFI v2.7 by EDK II
Oct 28 13:20:45.148036 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Oct 28 13:20:45.148046 kernel: random: crng init done
Oct 28 13:20:45.148054 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Oct 28 13:20:45.148062 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Oct 28 13:20:45.148069 kernel: secureboot: Secure boot disabled
Oct 28 13:20:45.148077 kernel: SMBIOS 2.8 present.
Oct 28 13:20:45.148085 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 28 13:20:45.148092 kernel: DMI: Memory slots populated: 1/1
Oct 28 13:20:45.148100 kernel: Hypervisor detected: KVM
Oct 28 13:20:45.148108 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 28 13:20:45.148115 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 28 13:20:45.148123 kernel: kvm-clock: using sched offset of 3914344533 cycles
Oct 28 13:20:45.148133 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 28 13:20:45.148142 kernel: tsc: Detected 2794.750 MHz processor
Oct 28 13:20:45.148150 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 28 13:20:45.148158 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 28 13:20:45.148166 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 28 13:20:45.148174 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 28 13:20:45.148182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 28 13:20:45.148190 kernel: Using GB pages for direct mapping
Oct 28 13:20:45.148200 kernel: ACPI: Early table checksum verification disabled
Oct 28 13:20:45.148208 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 28 13:20:45.148216 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 28 13:20:45.148224 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 13:20:45.148232 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 13:20:45.148240 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 28 13:20:45.148248 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 13:20:45.148258 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 13:20:45.148266 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 13:20:45.148274 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 13:20:45.148282 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 28 13:20:45.148290 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 28 13:20:45.148298 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 28 13:20:45.148307 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 28 13:20:45.148317 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 28 13:20:45.148325 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 28 13:20:45.148335 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 28 13:20:45.148344 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 28 13:20:45.148354 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 28 13:20:45.148362 kernel: No NUMA configuration found
Oct 28 13:20:45.148370 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Oct 28 13:20:45.148380 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Oct 28 13:20:45.148388 kernel: Zone ranges:
Oct 28 13:20:45.148396 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 28 13:20:45.148404 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Oct 28 13:20:45.148412 kernel: Normal empty
Oct 28 13:20:45.148420 kernel: Device empty
Oct 28 13:20:45.148428 kernel: Movable zone start for each node
Oct 28 13:20:45.148436 kernel: Early memory node ranges
Oct 28 13:20:45.148446 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 28 13:20:45.148454 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 28 13:20:45.148462 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 28 13:20:45.148469 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Oct 28 13:20:45.148477 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Oct 28 13:20:45.148485 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Oct 28 13:20:45.148493 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Oct 28 13:20:45.148501 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Oct 28 13:20:45.148511 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Oct 28 13:20:45.148527 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 28 13:20:45.148542 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 28 13:20:45.148552 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 28 13:20:45.148560 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 28 13:20:45.148568 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Oct 28 13:20:45.148576 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Oct 28 13:20:45.148584 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 28 13:20:45.148593 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 28 13:20:45.148603 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Oct 28 13:20:45.148612 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 28 13:20:45.148620 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 28 13:20:45.148628 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 28 13:20:45.148639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 28 13:20:45.148647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 28 13:20:45.148655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 28 13:20:45.148663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 28 13:20:45.148672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 28 13:20:45.148680 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 28 13:20:45.148688 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 28 13:20:45.148696 kernel: TSC deadline timer available
Oct 28 13:20:45.148706 kernel: CPU topo: Max. logical packages: 1
Oct 28 13:20:45.148715 kernel: CPU topo: Max. logical dies: 1
Oct 28 13:20:45.148723 kernel: CPU topo: Max. dies per package: 1
Oct 28 13:20:45.148731 kernel: CPU topo: Max. threads per core: 1
Oct 28 13:20:45.148739 kernel: CPU topo: Num. cores per package: 4
Oct 28 13:20:45.148748 kernel: CPU topo: Num. threads per package: 4
Oct 28 13:20:45.148756 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 28 13:20:45.148766 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 28 13:20:45.148774 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 28 13:20:45.148782 kernel: kvm-guest: setup PV sched yield
Oct 28 13:20:45.148791 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 28 13:20:45.148799 kernel: Booting paravirtualized kernel on KVM
Oct 28 13:20:45.148807 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 28 13:20:45.148816 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 28 13:20:45.148824 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 28 13:20:45.148835 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 28 13:20:45.148843 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 28 13:20:45.148851 kernel: kvm-guest: PV spinlocks enabled
Oct 28 13:20:45.148860 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 28 13:20:45.148880 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b
Oct 28 13:20:45.148889 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 28 13:20:45.148900 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 28 13:20:45.148908 kernel: Fallback order for Node 0: 0
Oct 28 13:20:45.148917 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Oct 28 13:20:45.148925 kernel: Policy zone: DMA32
Oct 28 13:20:45.148933 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 28 13:20:45.148941 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 28 13:20:45.148950 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 28 13:20:45.148960 kernel: ftrace: allocated 157 pages with 5 groups
Oct 28 13:20:45.148968 kernel: Dynamic Preempt: voluntary
Oct 28 13:20:45.148976 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 28 13:20:45.148985 kernel: rcu: RCU event tracing is enabled.
Oct 28 13:20:45.148994 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 28 13:20:45.149002 kernel: Trampoline variant of Tasks RCU enabled.
Oct 28 13:20:45.149011 kernel: Rude variant of Tasks RCU enabled.
Oct 28 13:20:45.149019 kernel: Tracing variant of Tasks RCU enabled.
Oct 28 13:20:45.149029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 28 13:20:45.149037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 28 13:20:45.149046 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 13:20:45.149054 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 13:20:45.149063 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 13:20:45.149071 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 28 13:20:45.149079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 28 13:20:45.149089 kernel: Console: colour dummy device 80x25
Oct 28 13:20:45.149098 kernel: printk: legacy console [ttyS0] enabled
Oct 28 13:20:45.149114 kernel: ACPI: Core revision 20240827
Oct 28 13:20:45.149131 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 28 13:20:45.149140 kernel: APIC: Switch to symmetric I/O mode setup
Oct 28 13:20:45.149148 kernel: x2apic enabled
Oct 28 13:20:45.149157 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 28 13:20:45.149165 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 28 13:20:45.149176 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 28 13:20:45.149184 kernel: kvm-guest: setup PV IPIs
Oct 28 13:20:45.149192 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 28 13:20:45.149201 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 28 13:20:45.149210 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 28 13:20:45.149218 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 28 13:20:45.149226 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 28 13:20:45.149237 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 28 13:20:45.149245 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 28 13:20:45.149253 kernel: Spectre V2 : Mitigation: Retpolines
Oct 28 13:20:45.149262 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 28 13:20:45.149270 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 28 13:20:45.149279 kernel: active return thunk: retbleed_return_thunk
Oct 28 13:20:45.149287 kernel: RETBleed: Mitigation: untrained return thunk
Oct 28 13:20:45.149298 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 28 13:20:45.149306 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 28 13:20:45.149315 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 28 13:20:45.149326 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 28 13:20:45.149335 kernel: active return thunk: srso_return_thunk
Oct 28 13:20:45.149345 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 28 13:20:45.149356 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 28 13:20:45.149364 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 28 13:20:45.149373 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 28 13:20:45.149381 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 28 13:20:45.149389 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 28 13:20:45.149398 kernel: Freeing SMP alternatives memory: 32K
Oct 28 13:20:45.149406 kernel: pid_max: default: 32768 minimum: 301
Oct 28 13:20:45.149416 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 28 13:20:45.149424 kernel: landlock: Up and running.
Oct 28 13:20:45.149434 kernel: SELinux: Initializing.
Oct 28 13:20:45.149445 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 13:20:45.149455 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 13:20:45.149466 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 28 13:20:45.149477 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 28 13:20:45.149490 kernel: ... version: 0
Oct 28 13:20:45.149500 kernel: ... bit width: 48
Oct 28 13:20:45.149510 kernel: ... generic registers: 6
Oct 28 13:20:45.149530 kernel: ... value mask: 0000ffffffffffff
Oct 28 13:20:45.149540 kernel: ... max period: 00007fffffffffff
Oct 28 13:20:45.149551 kernel: ... fixed-purpose events: 0
Oct 28 13:20:45.149561 kernel: ... event mask: 000000000000003f
Oct 28 13:20:45.149571 kernel: signal: max sigframe size: 1776
Oct 28 13:20:45.149584 kernel: rcu: Hierarchical SRCU implementation.
Oct 28 13:20:45.149595 kernel: rcu: Max phase no-delay instances is 400.
Oct 28 13:20:45.149605 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 28 13:20:45.149616 kernel: smp: Bringing up secondary CPUs ...
Oct 28 13:20:45.149627 kernel: smpboot: x86: Booting SMP configuration:
Oct 28 13:20:45.149637 kernel: .... node #0, CPUs: #1 #2 #3
Oct 28 13:20:45.149645 kernel: smp: Brought up 1 node, 4 CPUs
Oct 28 13:20:45.149656 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 28 13:20:45.149665 kernel: Memory: 2445196K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15960K init, 2084K bss, 114668K reserved, 0K cma-reserved)
Oct 28 13:20:45.149674 kernel: devtmpfs: initialized
Oct 28 13:20:45.149682 kernel: x86/mm: Memory block size: 128MB
Oct 28 13:20:45.149690 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 28 13:20:45.149699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 28 13:20:45.149707 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Oct 28 13:20:45.149718 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 28 13:20:45.149727 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Oct 28 13:20:45.149736 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 28 13:20:45.149745 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 28 13:20:45.149754 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 28 13:20:45.149762 kernel: pinctrl core: initialized pinctrl subsystem
Oct 28 13:20:45.149771 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 28 13:20:45.149782 kernel: audit: initializing netlink subsys (disabled)
Oct 28 13:20:45.149791 kernel: audit: type=2000 audit(1761657643.514:1): state=initialized audit_enabled=0 res=1
Oct 28 13:20:45.149799 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 28 13:20:45.149808 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 28 13:20:45.149817 kernel: cpuidle: using governor menu
Oct 28 13:20:45.149826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 28 13:20:45.149835 kernel: dca service started, version 1.12.1
Oct 28 13:20:45.149845 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 28 13:20:45.149854 kernel: PCI: Using configuration type 1 for base access
Oct 28 13:20:45.149876 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 28 13:20:45.149885 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 28 13:20:45.149893 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 28 13:20:45.149902 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 28 13:20:45.149910 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 28 13:20:45.149921 kernel: ACPI: Added _OSI(Module Device)
Oct 28 13:20:45.149929 kernel: ACPI: Added _OSI(Processor Device)
Oct 28 13:20:45.149937 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 28 13:20:45.149946 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 28 13:20:45.149954 kernel: ACPI: Interpreter enabled
Oct 28 13:20:45.149962 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 28 13:20:45.149970 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 28 13:20:45.149981 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 28 13:20:45.149989 kernel: PCI: Using E820 reservations for host bridge windows
Oct 28 13:20:45.149997 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 28 13:20:45.150006 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 28 13:20:45.150235 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 28 13:20:45.150414 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 28 13:20:45.150597 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 28 13:20:45.150608 kernel: PCI host bridge to bus 0000:00
Oct 28 13:20:45.150774 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 28 13:20:45.150946 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 28 13:20:45.151099 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 28 13:20:45.151252 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 28 13:20:45.151407 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 28 13:20:45.151566 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 28 13:20:45.151719 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 28 13:20:45.151918 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 28 13:20:45.152096 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 28 13:20:45.152267 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 28 13:20:45.152441 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 28 13:20:45.152615 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 28 13:20:45.152780 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 28 13:20:45.153128 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 28 13:20:45.153503 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 28 13:20:45.153780 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 28 13:20:45.153985 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 28 13:20:45.154160 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 28 13:20:45.154327 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 28 13:20:45.154493 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 28 13:20:45.154667 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 28 13:20:45.154845 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 28 13:20:45.155030 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 28 13:20:45.155198 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 28 13:20:45.155362 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 28 13:20:45.155538 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 28 13:20:45.155715 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 28 13:20:45.155896 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 28 13:20:45.156070 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 28 13:20:45.156234 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 28 13:20:45.156418 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 28 13:20:45.156619 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 28 13:20:45.156803 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 28 13:20:45.156816 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 28 13:20:45.156825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 28 13:20:45.156833 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 28 13:20:45.156841 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 28 13:20:45.156850 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 28 13:20:45.156876 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 28 13:20:45.156885 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 28 13:20:45.156894 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 28 13:20:45.156902 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 28 13:20:45.156910 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 28 13:20:45.156918 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 28 13:20:45.156927 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 28 13:20:45.156935 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 28 13:20:45.156946 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 28 13:20:45.156954 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 28 13:20:45.156962 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 28 13:20:45.156971 kernel: iommu: Default domain type: Translated
Oct 28 13:20:45.156979 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 28 13:20:45.156987 kernel: efivars: Registered efivars operations
Oct 28 13:20:45.156995 kernel: PCI: Using ACPI for IRQ routing
Oct 28 13:20:45.157005 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 28 13:20:45.157014 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 28 13:20:45.157022 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Oct 28 13:20:45.157030 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Oct 28 13:20:45.157039 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Oct 28 13:20:45.157047 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Oct 28 13:20:45.157055 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Oct 28 13:20:45.157065 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Oct 28 13:20:45.157074 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Oct 28 13:20:45.157245 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 28 13:20:45.157408 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 28 13:20:45.157590 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 28 13:20:45.157602 kernel: vgaarb: loaded
Oct 28 13:20:45.157614 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 28 13:20:45.157622 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 28 13:20:45.157631 kernel: clocksource: Switched to clocksource kvm-clock
Oct 28 13:20:45.157639 kernel: VFS: Disk quotas dquot_6.6.0
Oct 28 13:20:45.157647 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 28 13:20:45.157656 kernel: pnp: PnP ACPI init
Oct 28 13:20:45.157846 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 28 13:20:45.157876 kernel: pnp: PnP ACPI: found 6 devices
Oct 28 13:20:45.157886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 28 13:20:45.157895 kernel: NET: Registered PF_INET protocol family
Oct 28 13:20:45.157904 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 28 13:20:45.157913 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 28 13:20:45.157921 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 28 13:20:45.157930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 28 13:20:45.157941 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 28 13:20:45.157950 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 28 13:20:45.157958 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 13:20:45.157967 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 13:20:45.157975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 28 13:20:45.157984 kernel: NET: Registered PF_XDP protocol family
Oct 28 13:20:45.158160 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 28 13:20:45.158330 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 28 13:20:45.158483 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 28 13:20:45.158647 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 28 13:20:45.158799 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 28 13:20:45.158970 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 28 13:20:45.159122 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 28 13:20:45.159276 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 28 13:20:45.159288 kernel: PCI: CLS 0 bytes, default 64
Oct 28 13:20:45.159297 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 28 13:20:45.159309 kernel: Initialise system trusted keyrings
Oct 28 13:20:45.159320 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 28 13:20:45.159329 kernel: Key type asymmetric registered
Oct 28 13:20:45.159337 kernel: Asymmetric key parser 'x509' registered
Oct 28 13:20:45.159346 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 28 13:20:45.159355 kernel: io scheduler mq-deadline registered
Oct 28 13:20:45.159364 kernel: io scheduler kyber registered
Oct 28 13:20:45.159372 kernel: io scheduler bfq registered
Oct 28 13:20:45.159383 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 28 13:20:45.159392 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 28 13:20:45.159401 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 28 13:20:45.159410 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 28 13:20:45.159418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 28 13:20:45.159427 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 28 13:20:45.159436 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 28 13:20:45.159446 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 28 13:20:45.159455 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 28 13:20:45.159660 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 28 13:20:45.159674 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 28 13:20:45.159832 kernel: rtc_cmos 00:04: registered as rtc0
Oct 28 13:20:45.160008 kernel: rtc_cmos 00:04: setting system clock to 2025-10-28T13:20:43 UTC (1761657643)
Oct 28 13:20:45.160166 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 28 13:20:45.160181 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 28 13:20:45.160189 kernel: efifb: probing for efifb
Oct 28 13:20:45.160198 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 28 13:20:45.160207 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 28 13:20:45.160216 kernel: efifb: scrolling: redraw
Oct 28 13:20:45.160224 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 28 13:20:45.160233 kernel: Console: switching to colour frame buffer device 160x50
Oct 28 13:20:45.160244 kernel: fb0: EFI VGA frame buffer device
Oct 28 13:20:45.160252 kernel: pstore: Using crash dump compression: deflate
Oct 28 13:20:45.160261 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 28 13:20:45.160270 kernel: NET: Registered PF_INET6 protocol family
Oct 28 13:20:45.160278 kernel: Segment Routing with IPv6
Oct 28 13:20:45.160287 kernel: In-situ OAM (IOAM) with IPv6
Oct 28 13:20:45.160295 kernel: NET: Registered PF_PACKET protocol family
Oct 28 13:20:45.160306 kernel: Key type dns_resolver registered
Oct 28 13:20:45.160315 kernel: IPI shorthand broadcast: enabled
Oct 28 13:20:45.160323 kernel: sched_clock: Marking stable (1027001978, 284971883)->(1433384636, -121410775)
Oct 28 13:20:45.160332 kernel: registered taskstats version 1
Oct 28 13:20:45.160341 kernel: Loading compiled-in X.509 certificates
Oct 28 13:20:45.160350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: cdff28e8ecdc0a80eff4a5776c5a29d2ceff67c8'
Oct 28 13:20:45.160358 kernel: Demotion targets for Node 0: null
Oct 28 13:20:45.160368 kernel: Key type .fscrypt registered
Oct 28
13:20:45.160377 kernel: Key type fscrypt-provisioning registered Oct 28 13:20:45.160385 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 28 13:20:45.160394 kernel: ima: Allocated hash algorithm: sha1 Oct 28 13:20:45.160403 kernel: ima: No architecture policies found Oct 28 13:20:45.160411 kernel: clk: Disabling unused clocks Oct 28 13:20:45.160420 kernel: Freeing unused kernel image (initmem) memory: 15960K Oct 28 13:20:45.160430 kernel: Write protecting the kernel read-only data: 40960k Oct 28 13:20:45.160439 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 28 13:20:45.160447 kernel: Run /init as init process Oct 28 13:20:45.160456 kernel: with arguments: Oct 28 13:20:45.160465 kernel: /init Oct 28 13:20:45.160473 kernel: with environment: Oct 28 13:20:45.160482 kernel: HOME=/ Oct 28 13:20:45.160490 kernel: TERM=linux Oct 28 13:20:45.160501 kernel: SCSI subsystem initialized Oct 28 13:20:45.160509 kernel: libata version 3.00 loaded. Oct 28 13:20:45.160703 kernel: ahci 0000:00:1f.2: version 3.0 Oct 28 13:20:45.160716 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 28 13:20:45.160899 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 28 13:20:45.161066 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 28 13:20:45.161236 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 28 13:20:45.161446 kernel: scsi host0: ahci Oct 28 13:20:45.161634 kernel: scsi host1: ahci Oct 28 13:20:45.161809 kernel: scsi host2: ahci Oct 28 13:20:45.162002 kernel: scsi host3: ahci Oct 28 13:20:45.162179 kernel: scsi host4: ahci Oct 28 13:20:45.162361 kernel: scsi host5: ahci Oct 28 13:20:45.162376 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 28 13:20:45.162385 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 28 13:20:45.162394 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 
Oct 28 13:20:45.162403 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 28 13:20:45.162412 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 28 13:20:45.162423 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 28 13:20:45.162432 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:45.162441 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:45.162450 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 28 13:20:45.162458 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:45.162467 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:45.162476 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:45.162486 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 13:20:45.162495 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 28 13:20:45.162503 kernel: ata3.00: applying bridge limits Oct 28 13:20:45.162512 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 13:20:45.162529 kernel: ata3.00: configured for UDMA/100 Oct 28 13:20:45.162727 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 28 13:20:45.162930 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 28 13:20:45.163103 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 28 13:20:45.163115 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 13:20:45.163124 kernel: GPT:16515071 != 27000831 Oct 28 13:20:45.163132 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 13:20:45.163141 kernel: GPT:16515071 != 27000831 Oct 28 13:20:45.163149 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 28 13:20:45.163161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 13:20:45.163170 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163352 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 28 13:20:45.163367 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 28 13:20:45.163556 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 28 13:20:45.163569 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 28 13:20:45.163581 kernel: device-mapper: uevent: version 1.0.3 Oct 28 13:20:45.163590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 13:20:45.163598 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 28 13:20:45.163607 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163616 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163626 kernel: raid6: avx2x4 gen() 29912 MB/s Oct 28 13:20:45.163635 kernel: raid6: avx2x2 gen() 31268 MB/s Oct 28 13:20:45.163643 kernel: raid6: avx2x1 gen() 25939 MB/s Oct 28 13:20:45.163654 kernel: raid6: using algorithm avx2x2 gen() 31268 MB/s Oct 28 13:20:45.163663 kernel: raid6: .... 
xor() 19963 MB/s, rmw enabled Oct 28 13:20:45.163671 kernel: raid6: using avx2x2 recovery algorithm Oct 28 13:20:45.163680 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163689 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163697 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163706 kernel: xor: automatically using best checksumming function avx Oct 28 13:20:45.163716 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163725 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 13:20:45.163734 kernel: BTRFS: device fsid af35db37-e08e-4bd7-9f3a-b576d01d2613 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (175) Oct 28 13:20:45.163743 kernel: BTRFS info (device dm-0): first mount of filesystem af35db37-e08e-4bd7-9f3a-b576d01d2613 Oct 28 13:20:45.163751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:45.163760 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 13:20:45.163769 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 13:20:45.163778 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:45.163789 kernel: loop: module loaded Oct 28 13:20:45.163797 kernel: loop0: detected capacity change from 0 to 100120 Oct 28 13:20:45.163806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 13:20:45.163816 systemd[1]: Successfully made /usr/ read-only. Oct 28 13:20:45.163828 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 13:20:45.163837 systemd[1]: Detected virtualization kvm. Oct 28 13:20:45.163849 systemd[1]: Detected architecture x86-64. 
Oct 28 13:20:45.163858 systemd[1]: Running in initrd. Oct 28 13:20:45.163882 systemd[1]: No hostname configured, using default hostname. Oct 28 13:20:45.163892 systemd[1]: Hostname set to . Oct 28 13:20:45.163901 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 13:20:45.163910 systemd[1]: Queued start job for default target initrd.target. Oct 28 13:20:45.163921 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 13:20:45.163931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 13:20:45.163940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 13:20:45.163950 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 28 13:20:45.163959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 13:20:45.163969 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 28 13:20:45.163981 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 28 13:20:45.163990 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:20:45.163999 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:20:45.164009 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 13:20:45.164018 systemd[1]: Reached target paths.target - Path Units. Oct 28 13:20:45.164027 systemd[1]: Reached target slices.target - Slice Units. Oct 28 13:20:45.164036 systemd[1]: Reached target swap.target - Swaps. Oct 28 13:20:45.164048 systemd[1]: Reached target timers.target - Timer Units. Oct 28 13:20:45.164057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 13:20:45.164066 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 28 13:20:45.164075 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 28 13:20:45.164085 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 28 13:20:45.164094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:20:45.164105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 13:20:45.164115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:20:45.164124 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 13:20:45.164133 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 13:20:45.164142 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 28 13:20:45.164151 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 13:20:45.164161 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 28 13:20:45.164173 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 28 13:20:45.164182 systemd[1]: Starting systemd-fsck-usr.service... Oct 28 13:20:45.164191 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 13:20:45.164200 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 13:20:45.164209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:45.164221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:20:45.164231 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 28 13:20:45.164240 systemd[1]: Finished systemd-fsck-usr.service. 
Oct 28 13:20:45.164250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 13:20:45.164259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 13:20:45.164292 systemd-journald[310]: Collecting audit messages is disabled. Oct 28 13:20:45.164313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 13:20:45.164324 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 28 13:20:45.164335 kernel: Bridge firewalling registered Oct 28 13:20:45.164344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 13:20:45.164354 systemd-journald[310]: Journal started Oct 28 13:20:45.164373 systemd-journald[310]: Runtime Journal (/run/log/journal/99c55bf2a8b140c0937ba77261424490) is 6M, max 48.1M, 42.1M free. Oct 28 13:20:45.160569 systemd-modules-load[312]: Inserted module 'br_netfilter' Oct 28 13:20:45.170709 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 13:20:45.170833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:20:45.174383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:20:45.175412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 13:20:45.190155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:45.196037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 28 13:20:45.199349 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 28 13:20:45.205455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 28 13:20:45.207508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:20:45.210026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 13:20:45.225603 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 13:20:45.230907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 28 13:20:45.261352 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:20:45.266272 systemd-resolved[343]: Positive Trust Anchors: Oct 28 13:20:45.266278 systemd-resolved[343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 13:20:45.266282 systemd-resolved[343]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 13:20:45.266312 systemd-resolved[343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 13:20:45.300062 systemd-resolved[343]: Defaulting to hostname 'linux'. Oct 28 13:20:45.301154 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 28 13:20:45.301583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 13:20:45.391913 kernel: Loading iSCSI transport class v2.0-870. Oct 28 13:20:45.406906 kernel: iscsi: registered transport (tcp) Oct 28 13:20:45.429307 kernel: iscsi: registered transport (qla4xxx) Oct 28 13:20:45.429348 kernel: QLogic iSCSI HBA Driver Oct 28 13:20:45.457199 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 13:20:45.483086 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:20:45.484741 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 13:20:45.543779 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 28 13:20:45.547091 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 28 13:20:45.551048 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 13:20:45.598660 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 28 13:20:45.602621 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:20:45.638162 systemd-udevd[595]: Using default interface naming scheme 'v257'. Oct 28 13:20:45.651253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:20:45.657351 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 13:20:45.683392 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 13:20:45.687815 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 13:20:45.693098 dracut-pre-trigger[670]: rd.md=0: removing MD RAID activation Oct 28 13:20:45.726029 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 28 13:20:45.731079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 13:20:45.743196 systemd-networkd[705]: lo: Link UP Oct 28 13:20:45.743203 systemd-networkd[705]: lo: Gained carrier Oct 28 13:20:45.743929 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 13:20:45.744568 systemd[1]: Reached target network.target - Network. Oct 28 13:20:45.819112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:20:45.822612 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 13:20:45.875090 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 13:20:45.887739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 13:20:45.907315 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 13:20:45.914964 kernel: cryptd: max_cpu_qlen set to 1000 Oct 28 13:20:45.937993 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 28 13:20:45.945880 kernel: AES CTR mode by8 optimization enabled Oct 28 13:20:45.953980 systemd-networkd[705]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:45.953990 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 13:20:45.955478 systemd-networkd[705]: eth0: Link UP Oct 28 13:20:45.955688 systemd-networkd[705]: eth0: Gained carrier Oct 28 13:20:45.955697 systemd-networkd[705]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:45.971353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 28 13:20:45.976056 systemd-networkd[705]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 13:20:45.983684 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 13:20:45.987621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 13:20:45.989471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:45.993291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:45.998753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:46.007595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 13:20:46.007713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:46.014208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:46.024350 disk-uuid[833]: Primary Header is updated. Oct 28 13:20:46.024350 disk-uuid[833]: Secondary Entries is updated. Oct 28 13:20:46.024350 disk-uuid[833]: Secondary Header is updated. Oct 28 13:20:46.027032 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 13:20:46.032440 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 13:20:46.035576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:20:46.037744 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 13:20:46.046097 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 13:20:46.070421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:46.084157 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 13:20:47.073158 disk-uuid[840]: Warning: The kernel is still using the old partition table. 
Oct 28 13:20:47.073158 disk-uuid[840]: The new table will be used at the next reboot or after you Oct 28 13:20:47.073158 disk-uuid[840]: run partprobe(8) or kpartx(8) Oct 28 13:20:47.073158 disk-uuid[840]: The operation has completed successfully. Oct 28 13:20:47.083278 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 13:20:47.083421 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 13:20:47.089814 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 28 13:20:47.130897 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864) Oct 28 13:20:47.134214 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:47.134234 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:47.138158 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:20:47.138174 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:20:47.145902 kernel: BTRFS info (device vda6): last unmount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:47.146605 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 28 13:20:47.149427 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 28 13:20:47.255916 ignition[883]: Ignition 2.22.0 Oct 28 13:20:47.255928 ignition[883]: Stage: fetch-offline Oct 28 13:20:47.255969 ignition[883]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:47.255980 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:47.256061 ignition[883]: parsed url from cmdline: "" Oct 28 13:20:47.256065 ignition[883]: no config URL provided Oct 28 13:20:47.256070 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 13:20:47.256081 ignition[883]: no config at "/usr/lib/ignition/user.ign" Oct 28 13:20:47.256122 ignition[883]: op(1): [started] loading QEMU firmware config module Oct 28 13:20:47.256127 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 28 13:20:47.264198 ignition[883]: op(1): [finished] loading QEMU firmware config module Oct 28 13:20:47.346488 ignition[883]: parsing config with SHA512: 331bab40be48532d78b16d4c827c035c3b35079f6eb758e74eb27d7d4f8d392d90221773ab958080df421b248ed577eeb0b4ff2c73c0dab37b682b1c4366e178 Oct 28 13:20:47.352336 unknown[883]: fetched base config from "system" Oct 28 13:20:47.352349 unknown[883]: fetched user config from "qemu" Oct 28 13:20:47.352745 ignition[883]: fetch-offline: fetch-offline passed Oct 28 13:20:47.352801 ignition[883]: Ignition finished successfully Oct 28 13:20:47.357751 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 13:20:47.358068 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 28 13:20:47.359012 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 28 13:20:47.394730 ignition[893]: Ignition 2.22.0 Oct 28 13:20:47.394744 ignition[893]: Stage: kargs Oct 28 13:20:47.394932 ignition[893]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:47.394942 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:47.395726 ignition[893]: kargs: kargs passed Oct 28 13:20:47.395768 ignition[893]: Ignition finished successfully Oct 28 13:20:47.401481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 13:20:47.405660 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 13:20:47.427987 systemd-networkd[705]: eth0: Gained IPv6LL Oct 28 13:20:47.439321 ignition[901]: Ignition 2.22.0 Oct 28 13:20:47.439342 ignition[901]: Stage: disks Oct 28 13:20:47.439506 ignition[901]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:47.439518 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:47.440290 ignition[901]: disks: disks passed Oct 28 13:20:47.443951 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 13:20:47.440333 ignition[901]: Ignition finished successfully Oct 28 13:20:47.444705 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 13:20:47.447091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 13:20:47.452198 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 13:20:47.452583 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 13:20:47.456641 systemd[1]: Reached target basic.target - Basic System. Oct 28 13:20:47.464613 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 28 13:20:47.508002 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 28 13:20:47.515831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Oct 28 13:20:47.521007 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 28 13:20:47.633907 kernel: EXT4-fs (vda9): mounted filesystem 533620cd-204e-4567-a68e-d0b19b60f72c r/w with ordered data mode. Quota mode: none. Oct 28 13:20:47.634125 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 13:20:47.637139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 13:20:47.640209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 13:20:47.643017 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 13:20:47.645233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 28 13:20:47.645266 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 13:20:47.645291 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 13:20:47.665240 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 13:20:47.666729 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 28 13:20:47.675185 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Oct 28 13:20:47.675210 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:47.675221 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:47.679171 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:20:47.679210 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:20:47.680611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 28 13:20:47.728513 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 13:20:47.734579 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Oct 28 13:20:47.739820 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 13:20:47.745668 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 13:20:47.838102 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 13:20:47.841295 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 13:20:47.843812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 13:20:47.863930 kernel: BTRFS info (device vda6): last unmount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:47.880044 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 13:20:47.901085 ignition[1034]: INFO : Ignition 2.22.0 Oct 28 13:20:47.901085 ignition[1034]: INFO : Stage: mount Oct 28 13:20:47.903582 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:47.903582 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:47.903582 ignition[1034]: INFO : mount: mount passed Oct 28 13:20:47.903582 ignition[1034]: INFO : Ignition finished successfully Oct 28 13:20:47.912047 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 13:20:47.916092 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 28 13:20:48.122747 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 13:20:48.124308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 28 13:20:48.144908 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Oct 28 13:20:48.144936 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3
Oct 28 13:20:48.144948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 28 13:20:48.150027 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 13:20:48.150046 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 13:20:48.151806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 28 13:20:48.188064 ignition[1063]: INFO : Ignition 2.22.0
Oct 28 13:20:48.188064 ignition[1063]: INFO : Stage: files
Oct 28 13:20:48.190476 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 13:20:48.190476 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 13:20:48.194562 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Oct 28 13:20:48.196994 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 28 13:20:48.196994 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 28 13:20:48.203951 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 28 13:20:48.206180 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 28 13:20:48.208270 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 28 13:20:48.206635 unknown[1063]: wrote ssh authorized keys file for user: core
Oct 28 13:20:48.212230 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 28 13:20:48.215386 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 28 13:20:48.258337 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 28 13:20:48.339875 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 13:20:48.343158 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 13:20:48.365662 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 13:20:48.365662 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 13:20:48.365662 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 28 13:20:48.365662 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 28 13:20:48.365662 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 28 13:20:48.365662 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 28 13:20:48.829014 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 28 13:20:49.162813 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 28 13:20:49.162813 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 28 13:20:49.168888 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 28 13:20:49.197115 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 13:20:49.204580 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 13:20:49.207085 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 28 13:20:49.207085 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 28 13:20:49.207085 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 28 13:20:49.207085 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 13:20:49.207085 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 13:20:49.207085 ignition[1063]: INFO : files: files passed
Oct 28 13:20:49.207085 ignition[1063]: INFO : Ignition finished successfully
Oct 28 13:20:49.220036 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 28 13:20:49.226260 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 28 13:20:49.229808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 28 13:20:49.251760 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 28 13:20:49.251913 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 28 13:20:49.258201 initrd-setup-root-after-ignition[1095]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 28 13:20:49.262487 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 13:20:49.262487 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 13:20:49.267520 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 13:20:49.271615 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 13:20:49.271853 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 28 13:20:49.273000 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 28 13:20:49.335728 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 28 13:20:49.335856 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 28 13:20:49.339514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 28 13:20:49.342971 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 28 13:20:49.348022 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 28 13:20:49.348962 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 28 13:20:49.394987 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 13:20:49.398329 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 28 13:20:49.425119 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 28 13:20:49.425341 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 28 13:20:49.429051 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 13:20:49.430854 systemd[1]: Stopped target timers.target - Timer Units.
Oct 28 13:20:49.434355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 28 13:20:49.434480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 13:20:49.440251 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 28 13:20:49.442119 systemd[1]: Stopped target basic.target - Basic System.
Oct 28 13:20:49.442817 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 28 13:20:49.443561 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 28 13:20:49.454357 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 28 13:20:49.454528 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 28 13:20:49.457887 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 28 13:20:49.458402 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 28 13:20:49.464171 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 28 13:20:49.467902 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 28 13:20:49.472492 systemd[1]: Stopped target swap.target - Swaps.
Oct 28 13:20:49.474048 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 28 13:20:49.474189 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 28 13:20:49.477013 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 28 13:20:49.477589 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 13:20:49.484152 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 28 13:20:49.487733 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 13:20:49.489312 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 28 13:20:49.489448 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 28 13:20:49.497970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 28 13:20:49.498104 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 28 13:20:49.499813 systemd[1]: Stopped target paths.target - Path Units.
Oct 28 13:20:49.504612 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 28 13:20:49.509999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 13:20:49.512137 systemd[1]: Stopped target slices.target - Slice Units.
Oct 28 13:20:49.515824 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 28 13:20:49.517243 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 28 13:20:49.517345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 13:20:49.521519 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 28 13:20:49.521600 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 13:20:49.525957 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 28 13:20:49.526094 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 13:20:49.527308 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 28 13:20:49.527461 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 28 13:20:49.535900 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 28 13:20:49.540190 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 28 13:20:49.544436 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 28 13:20:49.546138 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 13:20:49.550465 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 28 13:20:49.550638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 13:20:49.555719 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 28 13:20:49.555901 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 28 13:20:49.564679 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 28 13:20:49.564819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 28 13:20:49.570484 ignition[1121]: INFO : Ignition 2.22.0
Oct 28 13:20:49.570484 ignition[1121]: INFO : Stage: umount
Oct 28 13:20:49.570484 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 13:20:49.570484 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 13:20:49.570484 ignition[1121]: INFO : umount: umount passed
Oct 28 13:20:49.570484 ignition[1121]: INFO : Ignition finished successfully
Oct 28 13:20:49.574042 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 28 13:20:49.574179 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 28 13:20:49.576598 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 28 13:20:49.577049 systemd[1]: Stopped target network.target - Network.
Oct 28 13:20:49.578131 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 28 13:20:49.578183 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 28 13:20:49.580851 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 28 13:20:49.580921 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 28 13:20:49.581404 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 28 13:20:49.581460 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 28 13:20:49.582218 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 28 13:20:49.582267 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 28 13:20:49.591706 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 28 13:20:49.594678 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 28 13:20:49.611968 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 28 13:20:49.612117 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 28 13:20:49.620075 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 28 13:20:49.620211 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 28 13:20:49.627185 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 28 13:20:49.629659 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 28 13:20:49.629720 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 13:20:49.637164 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 28 13:20:49.638674 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 28 13:20:49.638742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 28 13:20:49.646141 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 28 13:20:49.647674 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 28 13:20:49.651232 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 28 13:20:49.651298 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 28 13:20:49.656964 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 13:20:49.662232 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 28 13:20:49.663525 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 28 13:20:49.665754 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 28 13:20:49.665841 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 28 13:20:49.676622 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 28 13:20:49.683055 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 13:20:49.687596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 28 13:20:49.687656 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 28 13:20:49.687784 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 28 13:20:49.687822 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 13:20:49.692477 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 28 13:20:49.692536 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 28 13:20:49.697328 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 28 13:20:49.697383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 28 13:20:49.703494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 28 13:20:49.703551 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 13:20:49.712150 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 28 13:20:49.713896 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 28 13:20:49.713950 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 13:20:49.717795 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 28 13:20:49.717845 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 13:20:49.719563 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 28 13:20:49.719613 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 28 13:20:49.720447 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 28 13:20:49.720491 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 13:20:49.729199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 28 13:20:49.729249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 13:20:49.731970 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 28 13:20:49.745010 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 28 13:20:49.759363 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 28 13:20:49.759524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 28 13:20:49.763464 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 28 13:20:49.768433 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 28 13:20:49.805053 systemd[1]: Switching root.
Oct 28 13:20:49.842725 systemd-journald[310]: Journal stopped
Oct 28 13:20:51.209730 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Oct 28 13:20:51.209796 kernel: SELinux: policy capability network_peer_controls=1
Oct 28 13:20:51.209811 kernel: SELinux: policy capability open_perms=1
Oct 28 13:20:51.209823 kernel: SELinux: policy capability extended_socket_class=1
Oct 28 13:20:51.209839 kernel: SELinux: policy capability always_check_network=0
Oct 28 13:20:51.209855 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 28 13:20:51.209886 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 28 13:20:51.209898 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 28 13:20:51.209914 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 28 13:20:51.209926 kernel: SELinux: policy capability userspace_initial_context=0
Oct 28 13:20:51.209938 kernel: audit: type=1403 audit(1761657650.331:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 28 13:20:51.209955 systemd[1]: Successfully loaded SELinux policy in 66.398ms.
Oct 28 13:20:51.209977 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.076ms.
Oct 28 13:20:51.209995 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 13:20:51.210009 systemd[1]: Detected virtualization kvm.
Oct 28 13:20:51.210023 systemd[1]: Detected architecture x86-64.
Oct 28 13:20:51.210035 systemd[1]: Detected first boot.
Oct 28 13:20:51.210052 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 28 13:20:51.210064 zram_generator::config[1166]: No configuration found.
Oct 28 13:20:51.210080 kernel: Guest personality initialized and is inactive
Oct 28 13:20:51.210094 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 28 13:20:51.210107 kernel: Initialized host personality
Oct 28 13:20:51.210118 kernel: NET: Registered PF_VSOCK protocol family
Oct 28 13:20:51.210131 systemd[1]: Populated /etc with preset unit settings.
Oct 28 13:20:51.210144 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 28 13:20:51.210156 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 28 13:20:51.210170 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 28 13:20:51.210184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 28 13:20:51.210197 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 28 13:20:51.210210 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 28 13:20:51.210224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 28 13:20:51.210237 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 28 13:20:51.210251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 28 13:20:51.210267 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 28 13:20:51.210280 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 28 13:20:51.210294 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 13:20:51.210307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 13:20:51.210321 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 28 13:20:51.210334 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 28 13:20:51.210347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 28 13:20:51.210363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 13:20:51.210384 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 28 13:20:51.210397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 13:20:51.210410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 13:20:51.210422 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 28 13:20:51.210435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 28 13:20:51.210450 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 28 13:20:51.210464 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 28 13:20:51.210476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 13:20:51.210489 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 28 13:20:51.210502 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 13:20:51.210515 systemd[1]: Reached target swap.target - Swaps.
Oct 28 13:20:51.210528 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 28 13:20:51.210542 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 28 13:20:51.210555 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 28 13:20:51.210567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 13:20:51.210580 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 13:20:51.210593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 13:20:51.210606 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 28 13:20:51.210618 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 28 13:20:51.210632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 28 13:20:51.210647 systemd[1]: Mounting media.mount - External Media Directory...
Oct 28 13:20:51.210659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 28 13:20:51.210672 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 28 13:20:51.210684 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 28 13:20:51.210696 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 28 13:20:51.210710 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 28 13:20:51.210724 systemd[1]: Reached target machines.target - Containers.
Oct 28 13:20:51.210737 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 28 13:20:51.210750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 28 13:20:51.210763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 13:20:51.210775 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 28 13:20:51.210788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 28 13:20:51.210801 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 28 13:20:51.210817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 28 13:20:51.210829 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 28 13:20:51.210842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 28 13:20:51.210855 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 28 13:20:51.210886 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 28 13:20:51.210899 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 28 13:20:51.210912 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 28 13:20:51.210927 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 28 13:20:51.210940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 28 13:20:51.210953 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 13:20:51.210966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 13:20:51.210978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 13:20:51.210994 kernel: ACPI: bus type drm_connector registered
Oct 28 13:20:51.211007 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 28 13:20:51.211021 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 28 13:20:51.211033 kernel: fuse: init (API version 7.41)
Oct 28 13:20:51.211046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 28 13:20:51.211059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 28 13:20:51.211073 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 28 13:20:51.211086 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 28 13:20:51.211116 systemd-journald[1251]: Collecting audit messages is disabled.
Oct 28 13:20:51.211139 systemd-journald[1251]: Journal started
Oct 28 13:20:51.211164 systemd-journald[1251]: Runtime Journal (/run/log/journal/99c55bf2a8b140c0937ba77261424490) is 6M, max 48.1M, 42.1M free.
Oct 28 13:20:50.892547 systemd[1]: Queued start job for default target multi-user.target.
Oct 28 13:20:50.919081 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 28 13:20:50.919652 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 28 13:20:51.214017 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 13:20:51.216410 systemd[1]: Mounted media.mount - External Media Directory.
Oct 28 13:20:51.218425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 28 13:20:51.220623 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 28 13:20:51.222861 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 28 13:20:51.224919 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 28 13:20:51.227315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 13:20:51.229832 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 28 13:20:51.230063 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 28 13:20:51.232474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 28 13:20:51.232683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 28 13:20:51.235045 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 28 13:20:51.235250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 28 13:20:51.237470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 28 13:20:51.237682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 28 13:20:51.240358 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 28 13:20:51.240579 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 28 13:20:51.242688 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:20:51.242911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:20:51.245113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 13:20:51.247451 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:20:51.250654 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 28 13:20:51.253233 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 28 13:20:51.269182 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 13:20:51.271718 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 28 13:20:51.275141 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 28 13:20:51.278134 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 28 13:20:51.280060 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 28 13:20:51.280096 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 13:20:51.282984 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 28 13:20:51.285317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:51.293019 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 28 13:20:51.296545 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Oct 28 13:20:51.298829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:20:51.299847 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 28 13:20:51.302096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 13:20:51.304061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:20:51.312035 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 28 13:20:51.315646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 13:20:51.321190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:20:51.321781 systemd-journald[1251]: Time spent on flushing to /var/log/journal/99c55bf2a8b140c0937ba77261424490 is 13.915ms for 1063 entries. Oct 28 13:20:51.321781 systemd-journald[1251]: System Journal (/var/log/journal/99c55bf2a8b140c0937ba77261424490) is 8M, max 163.5M, 155.5M free. Oct 28 13:20:51.345995 systemd-journald[1251]: Received client request to flush runtime journal. Oct 28 13:20:51.346038 kernel: loop1: detected capacity change from 0 to 110984 Oct 28 13:20:51.326294 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 28 13:20:51.328314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 28 13:20:51.331386 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 28 13:20:51.338271 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 28 13:20:51.342783 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 28 13:20:51.349078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 28 13:20:51.351662 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 28 13:20:51.366650 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Oct 28 13:20:51.367027 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Oct 28 13:20:51.371531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 13:20:51.375902 kernel: loop2: detected capacity change from 0 to 224512 Oct 28 13:20:51.374046 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 28 13:20:51.380231 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 28 13:20:51.411894 kernel: loop3: detected capacity change from 0 to 118328 Oct 28 13:20:51.418889 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 28 13:20:51.422766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 13:20:51.427097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 13:20:51.438692 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 28 13:20:51.440322 kernel: loop4: detected capacity change from 0 to 110984 Oct 28 13:20:51.449898 kernel: loop5: detected capacity change from 0 to 224512 Oct 28 13:20:51.451921 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Oct 28 13:20:51.451958 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Oct 28 13:20:51.461100 kernel: loop6: detected capacity change from 0 to 118328 Oct 28 13:20:51.460947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:20:51.470128 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 28 13:20:51.473756 (sd-merge)[1310]: Merged extensions into '/usr'. 
Oct 28 13:20:51.478487 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 13:20:51.478503 systemd[1]: Reloading... Oct 28 13:20:51.546897 zram_generator::config[1351]: No configuration found. Oct 28 13:20:51.581239 systemd-resolved[1308]: Positive Trust Anchors: Oct 28 13:20:51.581596 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 13:20:51.581650 systemd-resolved[1308]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 13:20:51.581722 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 13:20:51.586179 systemd-resolved[1308]: Defaulting to hostname 'linux'. Oct 28 13:20:51.728727 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 13:20:51.729124 systemd[1]: Reloading finished in 250 ms. Oct 28 13:20:51.759476 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 28 13:20:51.761552 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 13:20:51.763669 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 13:20:51.767961 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 13:20:51.792232 systemd[1]: Starting ensure-sysext.service... Oct 28 13:20:51.794921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 28 13:20:51.816086 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 13:20:51.816138 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 28 13:20:51.816533 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 13:20:51.816804 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 13:20:51.817733 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 13:20:51.818024 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Oct 28 13:20:51.818095 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Oct 28 13:20:51.823930 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 13:20:51.823943 systemd-tmpfiles[1383]: Skipping /boot Oct 28 13:20:51.824266 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Oct 28 13:20:51.824284 systemd[1]: Reloading... Oct 28 13:20:51.834463 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 13:20:51.834481 systemd-tmpfiles[1383]: Skipping /boot Oct 28 13:20:51.882905 zram_generator::config[1413]: No configuration found. Oct 28 13:20:52.054181 systemd[1]: Reloading finished in 229 ms. Oct 28 13:20:52.080429 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 13:20:52.107463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:20:52.118767 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 13:20:52.121708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Oct 28 13:20:52.141463 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 13:20:52.144878 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 13:20:52.149140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:20:52.153311 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 13:20:52.161185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:52.161424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:20:52.163095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:20:52.167937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:20:52.171810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:20:52.173618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:52.173725 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:52.173813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:52.183201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:20:52.183434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:20:52.186334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 28 13:20:52.186647 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 13:20:52.190616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:52.192030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:20:52.192201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:52.192291 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:52.192390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:20:52.192484 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:52.193893 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 13:20:52.202763 systemd-udevd[1459]: Using default interface naming scheme 'v257'. Oct 28 13:20:52.204044 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:20:52.204269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:20:52.212033 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 13:20:52.213189 augenrules[1483]: No rules Oct 28 13:20:52.214714 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:20:52.214980 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 28 13:20:52.221135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:52.221343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:20:52.222555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:20:52.226977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 13:20:52.234991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:20:52.237995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:20:52.239926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:52.239964 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:52.240032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:52.240836 systemd[1]: Finished ensure-sysext.service. Oct 28 13:20:52.243618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:20:52.243829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:20:52.246418 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 13:20:52.246658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 13:20:52.248758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 13:20:52.249005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 28 13:20:52.251138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:20:52.253584 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:20:52.253774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:20:52.271182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 13:20:52.273426 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:20:52.273551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 13:20:52.276985 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 13:20:52.301945 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 13:20:52.305238 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 13:20:52.336913 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 28 13:20:52.385636 systemd-networkd[1515]: lo: Link UP Oct 28 13:20:52.385650 systemd-networkd[1515]: lo: Gained carrier Oct 28 13:20:52.388127 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 13:20:52.390830 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 13:20:52.392987 systemd[1]: Reached target network.target - Network. Oct 28 13:20:52.394739 systemd[1]: Reached target time-set.target - System Time Set. Oct 28 13:20:52.399019 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Oct 28 13:20:52.402985 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 13:20:52.411768 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:52.411780 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 13:20:52.413222 systemd-networkd[1515]: eth0: Link UP Oct 28 13:20:52.413396 systemd-networkd[1515]: eth0: Gained carrier Oct 28 13:20:52.413418 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:52.419042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 13:20:52.429939 systemd-networkd[1515]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 13:20:52.431624 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Oct 28 13:20:52.431929 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 13:20:53.727287 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 28 13:20:53.727339 systemd-timesyncd[1516]: Initial clock synchronization to Tue 2025-10-28 13:20:53.727174 UTC. Oct 28 13:20:53.727389 systemd-resolved[1308]: Clock change detected. Flushing caches. Oct 28 13:20:53.747069 kernel: mousedev: PS/2 mouse device common for all mice Oct 28 13:20:53.753267 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 13:20:53.757587 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Oct 28 13:20:53.775190 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 28 13:20:53.780675 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 28 13:20:53.780952 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 28 13:20:53.781300 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 28 13:20:53.787302 kernel: ACPI: button: Power Button [PWRF] Oct 28 13:20:53.887343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:53.910363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 13:20:53.910659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:53.915319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:53.958094 kernel: kvm_amd: TSC scaling supported Oct 28 13:20:53.958148 kernel: kvm_amd: Nested Virtualization enabled Oct 28 13:20:53.958186 kernel: kvm_amd: Nested Paging enabled Oct 28 13:20:53.958202 kernel: kvm_amd: LBR virtualization supported Oct 28 13:20:53.958219 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 28 13:20:53.958232 kernel: kvm_amd: Virtual GIF supported Oct 28 13:20:53.992073 kernel: EDAC MC: Ver: 3.0.0 Oct 28 13:20:54.014965 ldconfig[1454]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 13:20:54.017321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:54.022312 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 13:20:54.025637 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 13:20:54.063038 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 13:20:54.065079 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 13:20:54.066887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 28 13:20:54.068896 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 13:20:54.070906 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 28 13:20:54.072924 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 13:20:54.074768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 13:20:54.076795 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 13:20:54.078810 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 13:20:54.078843 systemd[1]: Reached target paths.target - Path Units. Oct 28 13:20:54.080320 systemd[1]: Reached target timers.target - Timer Units. Oct 28 13:20:54.082764 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 13:20:54.085933 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 13:20:54.089404 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 13:20:54.091593 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 13:20:54.093638 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 13:20:54.098137 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 13:20:54.100036 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 13:20:54.102462 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 13:20:54.104855 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 13:20:54.106374 systemd[1]: Reached target basic.target - Basic System. 
Oct 28 13:20:54.107884 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 13:20:54.107914 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 13:20:54.108933 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 13:20:54.111592 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 13:20:54.113973 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 13:20:54.116689 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 13:20:54.119181 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 13:20:54.121730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 13:20:54.122747 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 28 13:20:54.130415 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 13:20:54.134951 jq[1579]: false Oct 28 13:20:54.136108 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 13:20:54.138323 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing passwd entry cache Oct 28 13:20:54.138333 oslogin_cache_refresh[1581]: Refreshing passwd entry cache Oct 28 13:20:54.141166 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 13:20:54.143708 extend-filesystems[1580]: Found /dev/vda6 Oct 28 13:20:54.144394 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 28 13:20:54.148360 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting users, quitting Oct 28 13:20:54.148355 oslogin_cache_refresh[1581]: Failure getting users, quitting Oct 28 13:20:54.148448 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 13:20:54.148448 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing group entry cache Oct 28 13:20:54.148372 oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 13:20:54.148423 oslogin_cache_refresh[1581]: Refreshing group entry cache Oct 28 13:20:54.148750 extend-filesystems[1580]: Found /dev/vda9 Oct 28 13:20:54.151108 extend-filesystems[1580]: Checking size of /dev/vda9 Oct 28 13:20:54.157013 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 28 13:20:54.158793 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 13:20:54.158862 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting groups, quitting Oct 28 13:20:54.158862 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 13:20:54.158853 oslogin_cache_refresh[1581]: Failure getting groups, quitting Oct 28 13:20:54.158863 oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 13:20:54.159385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 13:20:54.162699 extend-filesystems[1580]: Resized partition /dev/vda9 Oct 28 13:20:54.163196 systemd[1]: Starting update-engine.service - Update Engine... 
Oct 28 13:20:54.164972 extend-filesystems[1604]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 13:20:54.168004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 13:20:54.172115 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 13:20:54.172167 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 13:20:54.175020 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 13:20:54.175372 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 13:20:54.175759 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 28 13:20:54.176583 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 28 13:20:54.179943 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 13:20:54.185569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 13:20:54.189229 jq[1606]: true Oct 28 13:20:54.189235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 13:20:54.190113 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 28 13:20:54.201333 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 13:20:54.213472 update_engine[1600]: I20251028 13:20:54.213119 1600 main.cc:92] Flatcar Update Engine starting Oct 28 13:20:54.226128 jq[1610]: true Oct 28 13:20:54.226236 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 13:20:54.226236 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 13:20:54.226236 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 28 13:20:54.230189 extend-filesystems[1580]: Resized filesystem in /dev/vda9 Oct 28 13:20:54.237039 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Oct 28 13:20:54.237431 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 13:20:54.245263 tar[1608]: linux-amd64/LICENSE Oct 28 13:20:54.245457 tar[1608]: linux-amd64/helm Oct 28 13:20:54.269353 dbus-daemon[1577]: [system] SELinux support is enabled Oct 28 13:20:54.269567 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 13:20:54.273364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 13:20:54.273395 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 13:20:54.275418 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 13:20:54.275438 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 13:20:54.281693 update_engine[1600]: I20251028 13:20:54.280972 1600 update_check_scheduler.cc:74] Next update check in 3m12s Oct 28 13:20:54.281192 systemd[1]: Started update-engine.service - Update Engine. Oct 28 13:20:54.283483 systemd-logind[1598]: Watching system buttons on /dev/input/event2 (Power Button) Oct 28 13:20:54.283745 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 28 13:20:54.285263 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 13:20:54.288662 systemd-logind[1598]: New seat seat0. Oct 28 13:20:54.290670 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 13:20:54.312099 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Oct 28 13:20:54.314063 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 28 13:20:54.316816 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 28 13:20:54.365990 locksmithd[1647]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 13:20:54.436834 containerd[1612]: time="2025-10-28T13:20:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 13:20:54.439152 containerd[1612]: time="2025-10-28T13:20:54.439120626Z" level=info msg="starting containerd" revision=cb1076646aa3740577fafbf3d914198b7fe8e3f7 version=v2.1.4 Oct 28 13:20:54.449067 containerd[1612]: time="2025-10-28T13:20:54.448697423Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.626µs" Oct 28 13:20:54.449067 containerd[1612]: time="2025-10-28T13:20:54.448740734Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 13:20:54.449067 containerd[1612]: time="2025-10-28T13:20:54.448785378Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 13:20:54.449067 containerd[1612]: time="2025-10-28T13:20:54.448796649Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 13:20:54.449067 containerd[1612]: time="2025-10-28T13:20:54.448968170Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 13:20:54.449067 containerd[1612]: time="2025-10-28T13:20:54.448981345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 13:20:54.449464 containerd[1612]: time="2025-10-28T13:20:54.449041969Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Oct 28 13:20:54.449508 containerd[1612]: time="2025-10-28T13:20:54.449495389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.449818 containerd[1612]: time="2025-10-28T13:20:54.449800631Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.449867 containerd[1612]: time="2025-10-28T13:20:54.449856716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 13:20:54.449910 containerd[1612]: time="2025-10-28T13:20:54.449899787Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 13:20:54.449949 containerd[1612]: time="2025-10-28T13:20:54.449939151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.450181 containerd[1612]: time="2025-10-28T13:20:54.450163962Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.450231 containerd[1612]: time="2025-10-28T13:20:54.450220659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 13:20:54.450369 containerd[1612]: time="2025-10-28T13:20:54.450355191Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.450625 containerd[1612]: time="2025-10-28T13:20:54.450608556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.450706 
containerd[1612]: time="2025-10-28T13:20:54.450692173Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 13:20:54.450749 containerd[1612]: time="2025-10-28T13:20:54.450738510Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 13:20:54.450820 containerd[1612]: time="2025-10-28T13:20:54.450808010Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 13:20:54.451224 containerd[1612]: time="2025-10-28T13:20:54.451200436Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 13:20:54.451375 containerd[1612]: time="2025-10-28T13:20:54.451360786Z" level=info msg="metadata content store policy set" policy=shared Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.457930965Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.457977483Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458131822Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458147021Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458157981Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458168150Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service 
type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458182818Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458194460Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458206222Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458216952Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458226189Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458234455Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458245726Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 13:20:54.459085 containerd[1612]: time="2025-10-28T13:20:54.458350292Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458367003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458379617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458390227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458408371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458420463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458432215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458444939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458455178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458467552Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458477320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458499051Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458550437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458562139Z" level=info msg="Start snapshots syncer" Oct 28 13:20:54.459335 containerd[1612]: time="2025-10-28T13:20:54.458585373Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 13:20:54.459586 containerd[1612]: 
time="2025-10-28T13:20:54.458800586Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 13:20:54.459586 containerd[1612]: time="2025-10-28T13:20:54.458842424Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Oct 28 13:20:54.459696 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 13:20:54.459899 containerd[1612]: time="2025-10-28T13:20:54.458908248Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 13:20:54.459899 containerd[1612]: time="2025-10-28T13:20:54.458998066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 13:20:54.459899 containerd[1612]: time="2025-10-28T13:20:54.459015319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 13:20:54.459899 containerd[1612]: time="2025-10-28T13:20:54.459024917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 13:20:54.459899 containerd[1612]: time="2025-10-28T13:20:54.459033904Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 13:20:54.460012 containerd[1612]: time="2025-10-28T13:20:54.459045285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 13:20:54.460081 containerd[1612]: time="2025-10-28T13:20:54.460067481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 13:20:54.460129 containerd[1612]: time="2025-10-28T13:20:54.460118267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 13:20:54.460185 containerd[1612]: time="2025-10-28T13:20:54.460173641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 13:20:54.460230 containerd[1612]: time="2025-10-28T13:20:54.460220288Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 13:20:54.460296 containerd[1612]: 
time="2025-10-28T13:20:54.460285380Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 13:20:54.460344 containerd[1612]: time="2025-10-28T13:20:54.460332819Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 13:20:54.460390 containerd[1612]: time="2025-10-28T13:20:54.460380599Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 13:20:54.460438 containerd[1612]: time="2025-10-28T13:20:54.460427366Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 13:20:54.460486 containerd[1612]: time="2025-10-28T13:20:54.460474925Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 13:20:54.460540 containerd[1612]: time="2025-10-28T13:20:54.460529558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 13:20:54.460585 containerd[1612]: time="2025-10-28T13:20:54.460575123Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 13:20:54.460643 containerd[1612]: time="2025-10-28T13:20:54.460632912Z" level=info msg="runtime interface created" Oct 28 13:20:54.460686 containerd[1612]: time="2025-10-28T13:20:54.460676704Z" level=info msg="created NRI interface" Oct 28 13:20:54.460728 containerd[1612]: time="2025-10-28T13:20:54.460718352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 13:20:54.460771 containerd[1612]: time="2025-10-28T13:20:54.460762194Z" level=info msg="Connect containerd service" Oct 28 13:20:54.460824 containerd[1612]: time="2025-10-28T13:20:54.460813360Z" level=info msg="using experimental NRI integration - 
disable nri plugin to prevent this" Oct 28 13:20:54.461574 containerd[1612]: time="2025-10-28T13:20:54.461554530Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 13:20:54.483281 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 13:20:54.487818 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 13:20:54.506731 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 13:20:54.507026 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 13:20:54.510530 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 13:20:54.536178 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 13:20:54.541378 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 13:20:54.545219 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 28 13:20:54.547440 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 13:20:54.553477 tar[1608]: linux-amd64/README.md Oct 28 13:20:54.562344 containerd[1612]: time="2025-10-28T13:20:54.562307522Z" level=info msg="Start subscribing containerd event" Oct 28 13:20:54.562458 containerd[1612]: time="2025-10-28T13:20:54.562356323Z" level=info msg="Start recovering state" Oct 28 13:20:54.562458 containerd[1612]: time="2025-10-28T13:20:54.562433678Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 28 13:20:54.562458 containerd[1612]: time="2025-10-28T13:20:54.562455419Z" level=info msg="Start event monitor" Oct 28 13:20:54.562521 containerd[1612]: time="2025-10-28T13:20:54.562467081Z" level=info msg="Start cni network conf syncer for default" Oct 28 13:20:54.562521 containerd[1612]: time="2025-10-28T13:20:54.562476338Z" level=info msg="Start streaming server" Oct 28 13:20:54.562521 containerd[1612]: time="2025-10-28T13:20:54.562485275Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 13:20:54.562521 containerd[1612]: time="2025-10-28T13:20:54.562497257Z" level=info msg="runtime interface starting up..." Oct 28 13:20:54.562521 containerd[1612]: time="2025-10-28T13:20:54.562503068Z" level=info msg="starting plugins..." Oct 28 13:20:54.562625 containerd[1612]: time="2025-10-28T13:20:54.562526292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 13:20:54.562646 containerd[1612]: time="2025-10-28T13:20:54.562503158Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 13:20:54.562794 systemd[1]: Started containerd.service - containerd container runtime. Oct 28 13:20:54.564877 containerd[1612]: time="2025-10-28T13:20:54.562888421Z" level=info msg="containerd successfully booted in 0.126595s" Oct 28 13:20:54.571304 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 28 13:20:55.120355 systemd-networkd[1515]: eth0: Gained IPv6LL Oct 28 13:20:55.123473 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 13:20:55.126201 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 13:20:55.129573 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 13:20:55.132677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:55.135507 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
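The `failed to load cni during init` error above is expected at this point in boot: containerd's CRI plugin looks for a network config in `/etc/cni/net.d` (the `confDir` shown in the cri plugin config), and on a Kubernetes node that file is normally installed later by a CNI add-on such as Flannel or Calico. As a sketch, a minimal bridge conflist of the kind the loader accepts looks like this (the name and subnet here are illustrative assumptions, not values from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Once any valid conflist appears in `/etc/cni/net.d`, the "cni network conf syncer" started above picks it up without a containerd restart.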
Oct 28 13:20:55.159956 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 13:20:55.162369 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 13:20:55.162627 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 28 13:20:55.165664 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 13:20:55.853574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:55.856102 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 13:20:55.858132 systemd[1]: Startup finished in 2.238s (kernel) + 5.528s (initrd) + 4.296s (userspace) = 12.063s. Oct 28 13:20:55.863331 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:20:56.275616 kubelet[1716]: E1028 13:20:56.275544 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:20:56.279516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:20:56.279720 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:20:56.280106 systemd[1]: kubelet.service: Consumed 970ms CPU time, 264M memory peak. Oct 28 13:20:57.618463 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 13:20:57.619676 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:57034.service - OpenSSH per-connection server daemon (10.0.0.1:57034). 
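The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state on a kubeadm-managed node: that file is only written by `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it appears. For reference, a minimal hand-written `KubeletConfiguration` has this shape (a sketch only; the values are assumptions, and kubeadm generates a much fuller file):

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by `kubeadm init`/`kubeadm join`
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# matches SystemdCgroup=true in the containerd runc options logged above
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```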
Oct 28 13:20:57.702382 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 57034 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:57.704343 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:57.710454 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 13:20:57.711517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 13:20:57.716978 systemd-logind[1598]: New session 1 of user core. Oct 28 13:20:57.737476 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 13:20:57.740273 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 13:20:57.754241 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 28 13:20:57.756427 systemd-logind[1598]: New session c1 of user core. Oct 28 13:20:57.901558 systemd[1734]: Queued start job for default target default.target. Oct 28 13:20:57.919186 systemd[1734]: Created slice app.slice - User Application Slice. Oct 28 13:20:57.919226 systemd[1734]: Reached target paths.target - Paths. Oct 28 13:20:57.919302 systemd[1734]: Reached target timers.target - Timers. Oct 28 13:20:57.921865 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 13:20:57.932681 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 13:20:57.932803 systemd[1734]: Reached target sockets.target - Sockets. Oct 28 13:20:57.932843 systemd[1734]: Reached target basic.target - Basic System. Oct 28 13:20:57.932883 systemd[1734]: Reached target default.target - Main User Target. Oct 28 13:20:57.932918 systemd[1734]: Startup finished in 170ms. Oct 28 13:20:57.933218 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 13:20:57.934758 systemd[1]: Started session-1.scope - Session 1 of User core. 
Oct 28 13:20:57.999904 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:57048.service - OpenSSH per-connection server daemon (10.0.0.1:57048). Oct 28 13:20:58.060354 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:58.061705 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:58.065948 systemd-logind[1598]: New session 2 of user core. Oct 28 13:20:58.082201 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 28 13:20:58.134737 sshd[1748]: Connection closed by 10.0.0.1 port 57048 Oct 28 13:20:58.135201 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:58.149480 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:57048.service: Deactivated successfully. Oct 28 13:20:58.151019 systemd[1]: session-2.scope: Deactivated successfully. Oct 28 13:20:58.151800 systemd-logind[1598]: Session 2 logged out. Waiting for processes to exit. Oct 28 13:20:58.154159 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:57050.service - OpenSSH per-connection server daemon (10.0.0.1:57050). Oct 28 13:20:58.154907 systemd-logind[1598]: Removed session 2. Oct 28 13:20:58.213653 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 57050 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:58.214808 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:58.218835 systemd-logind[1598]: New session 3 of user core. Oct 28 13:20:58.233215 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 28 13:20:58.282297 sshd[1757]: Connection closed by 10.0.0.1 port 57050 Oct 28 13:20:58.282613 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:58.291072 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:57050.service: Deactivated successfully. 
Oct 28 13:20:58.293203 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 13:20:58.294032 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Oct 28 13:20:58.296701 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:57054.service - OpenSSH per-connection server daemon (10.0.0.1:57054). Oct 28 13:20:58.297282 systemd-logind[1598]: Removed session 3. Oct 28 13:20:58.344890 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:58.346133 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:58.350313 systemd-logind[1598]: New session 4 of user core. Oct 28 13:20:58.360174 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 13:20:58.413921 sshd[1766]: Connection closed by 10.0.0.1 port 57054 Oct 28 13:20:58.414185 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:58.428473 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:57054.service: Deactivated successfully. Oct 28 13:20:58.430146 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 13:20:58.430798 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. Oct 28 13:20:58.433404 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:57062.service - OpenSSH per-connection server daemon (10.0.0.1:57062). Oct 28 13:20:58.434038 systemd-logind[1598]: Removed session 4. Oct 28 13:20:58.497168 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 57062 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:58.498452 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:58.502560 systemd-logind[1598]: New session 5 of user core. Oct 28 13:20:58.520194 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 28 13:20:58.580841 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 13:20:58.581155 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:58.604530 sudo[1776]: pam_unix(sudo:session): session closed for user root Oct 28 13:20:58.606301 sshd[1775]: Connection closed by 10.0.0.1 port 57062 Oct 28 13:20:58.606676 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:58.622631 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:57062.service: Deactivated successfully. Oct 28 13:20:58.624402 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 13:20:58.625140 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Oct 28 13:20:58.627681 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:57068.service - OpenSSH per-connection server daemon (10.0.0.1:57068). Oct 28 13:20:58.628222 systemd-logind[1598]: Removed session 5. Oct 28 13:20:58.685090 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 57068 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:58.686445 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:58.690502 systemd-logind[1598]: New session 6 of user core. Oct 28 13:20:58.706166 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 28 13:20:58.759307 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 13:20:58.759612 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:58.767138 sudo[1787]: pam_unix(sudo:session): session closed for user root Oct 28 13:20:58.773566 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 13:20:58.773849 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:58.783063 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 13:20:58.835309 augenrules[1809]: No rules Oct 28 13:20:58.836937 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:20:58.837233 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 13:20:58.838336 sudo[1786]: pam_unix(sudo:session): session closed for user root Oct 28 13:20:58.840010 sshd[1785]: Connection closed by 10.0.0.1 port 57068 Oct 28 13:20:58.840326 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:58.852568 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:57068.service: Deactivated successfully. Oct 28 13:20:58.854375 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 13:20:58.855083 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. Oct 28 13:20:58.857591 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:57074.service - OpenSSH per-connection server daemon (10.0.0.1:57074). Oct 28 13:20:58.858101 systemd-logind[1598]: Removed session 6. Oct 28 13:20:58.912890 sshd[1818]: Accepted publickey for core from 10.0.0.1 port 57074 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:20:58.914122 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:58.918345 systemd-logind[1598]: New session 7 of user core. 
Oct 28 13:20:58.946191 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 13:20:58.999078 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 13:20:58.999375 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:59.369268 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 28 13:20:59.381396 (dockerd)[1842]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 13:20:59.637249 dockerd[1842]: time="2025-10-28T13:20:59.637132076Z" level=info msg="Starting up" Oct 28 13:20:59.637954 dockerd[1842]: time="2025-10-28T13:20:59.637935142Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 13:20:59.648915 dockerd[1842]: time="2025-10-28T13:20:59.648867911Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 13:21:00.471485 dockerd[1842]: time="2025-10-28T13:21:00.471425276Z" level=info msg="Loading containers: start." Oct 28 13:21:00.483113 kernel: Initializing XFRM netlink socket Oct 28 13:21:00.744396 systemd-networkd[1515]: docker0: Link UP Oct 28 13:21:00.750932 dockerd[1842]: time="2025-10-28T13:21:00.750881005Z" level=info msg="Loading containers: done." Oct 28 13:21:00.764756 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2316408929-merged.mount: Deactivated successfully. 
Oct 28 13:21:00.766439 dockerd[1842]: time="2025-10-28T13:21:00.766384334Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 13:21:00.766558 dockerd[1842]: time="2025-10-28T13:21:00.766479482Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 13:21:00.766608 dockerd[1842]: time="2025-10-28T13:21:00.766591532Z" level=info msg="Initializing buildkit" Oct 28 13:21:00.796087 dockerd[1842]: time="2025-10-28T13:21:00.796029669Z" level=info msg="Completed buildkit initialization" Oct 28 13:21:00.802659 dockerd[1842]: time="2025-10-28T13:21:00.802613724Z" level=info msg="Daemon has completed initialization" Oct 28 13:21:00.802746 dockerd[1842]: time="2025-10-28T13:21:00.802705857Z" level=info msg="API listen on /run/docker.sock" Oct 28 13:21:00.802961 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 13:21:01.567828 containerd[1612]: time="2025-10-28T13:21:01.567774851Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 28 13:21:02.152687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338830387.mount: Deactivated successfully. 
Oct 28 13:21:02.886014 containerd[1612]: time="2025-10-28T13:21:02.885951933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:02.886753 containerd[1612]: time="2025-10-28T13:21:02.886700666Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=27191533" Oct 28 13:21:02.887808 containerd[1612]: time="2025-10-28T13:21:02.887775852Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:02.890144 containerd[1612]: time="2025-10-28T13:21:02.890096654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:02.890995 containerd[1612]: time="2025-10-28T13:21:02.890970202Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.323152451s" Oct 28 13:21:02.891032 containerd[1612]: time="2025-10-28T13:21:02.891000919Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 28 13:21:02.891567 containerd[1612]: time="2025-10-28T13:21:02.891542525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 28 13:21:03.998839 containerd[1612]: time="2025-10-28T13:21:03.998766269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:03.999614 containerd[1612]: time="2025-10-28T13:21:03.999551171Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24778872" Oct 28 13:21:04.000668 containerd[1612]: time="2025-10-28T13:21:04.000623221Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:04.003448 containerd[1612]: time="2025-10-28T13:21:04.003391712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:04.004238 containerd[1612]: time="2025-10-28T13:21:04.004210828Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.112624932s" Oct 28 13:21:04.004238 containerd[1612]: time="2025-10-28T13:21:04.004238309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 28 13:21:04.004769 containerd[1612]: time="2025-10-28T13:21:04.004713210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 28 13:21:05.515417 containerd[1612]: time="2025-10-28T13:21:05.515318654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:05.516416 containerd[1612]: time="2025-10-28T13:21:05.516340991Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19170904" Oct 28 13:21:05.517532 containerd[1612]: time="2025-10-28T13:21:05.517483764Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:05.520011 containerd[1612]: time="2025-10-28T13:21:05.519970817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:05.520794 containerd[1612]: time="2025-10-28T13:21:05.520768513Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.516014988s" Oct 28 13:21:05.520831 containerd[1612]: time="2025-10-28T13:21:05.520797087Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 28 13:21:05.521197 containerd[1612]: time="2025-10-28T13:21:05.521174885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 28 13:21:06.530100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 13:21:06.531651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:21:06.553982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857294707.mount: Deactivated successfully. Oct 28 13:21:06.739425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 28 13:21:06.749347 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:21:06.788866 kubelet[2145]: E1028 13:21:06.788726 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:21:06.795605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:21:06.795808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:21:06.796201 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.5M memory peak. Oct 28 13:21:07.753314 containerd[1612]: time="2025-10-28T13:21:07.753245567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:07.754103 containerd[1612]: time="2025-10-28T13:21:07.754075553Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=0" Oct 28 13:21:07.755164 containerd[1612]: time="2025-10-28T13:21:07.755140269Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:07.757344 containerd[1612]: time="2025-10-28T13:21:07.757249494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:07.757996 containerd[1612]: time="2025-10-28T13:21:07.757942634Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id 
\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.236738735s" Oct 28 13:21:07.758029 containerd[1612]: time="2025-10-28T13:21:07.757992267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 28 13:21:07.758600 containerd[1612]: time="2025-10-28T13:21:07.758553108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 28 13:21:08.269434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140884316.mount: Deactivated successfully. Oct 28 13:21:08.843259 containerd[1612]: time="2025-10-28T13:21:08.843196925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:08.844098 containerd[1612]: time="2025-10-28T13:21:08.844039275Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Oct 28 13:21:08.845347 containerd[1612]: time="2025-10-28T13:21:08.845296312Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:08.847821 containerd[1612]: time="2025-10-28T13:21:08.847794746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:08.848847 containerd[1612]: time="2025-10-28T13:21:08.848814669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.090205615s" Oct 28 13:21:08.848847 containerd[1612]: time="2025-10-28T13:21:08.848845116Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 28 13:21:08.849451 containerd[1612]: time="2025-10-28T13:21:08.849350974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 28 13:21:09.331574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854758128.mount: Deactivated successfully. Oct 28 13:21:09.336691 containerd[1612]: time="2025-10-28T13:21:09.336637707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:21:09.337426 containerd[1612]: time="2025-10-28T13:21:09.337382954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Oct 28 13:21:09.338621 containerd[1612]: time="2025-10-28T13:21:09.338575380Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:21:09.340435 containerd[1612]: time="2025-10-28T13:21:09.340397607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:21:09.340922 containerd[1612]: time="2025-10-28T13:21:09.340888467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 491.511895ms" Oct 28 13:21:09.340922 containerd[1612]: time="2025-10-28T13:21:09.340914836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 28 13:21:09.341398 containerd[1612]: time="2025-10-28T13:21:09.341369379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 28 13:21:09.897840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920452139.mount: Deactivated successfully. Oct 28 13:21:12.534834 containerd[1612]: time="2025-10-28T13:21:12.534777578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:12.535658 containerd[1612]: time="2025-10-28T13:21:12.535629916Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Oct 28 13:21:12.536886 containerd[1612]: time="2025-10-28T13:21:12.536855564Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:12.539505 containerd[1612]: time="2025-10-28T13:21:12.539467251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:12.540697 containerd[1612]: time="2025-10-28T13:21:12.540661621Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.199267165s" Oct 28 13:21:12.540697 containerd[1612]: time="2025-10-28T13:21:12.540690755Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 28 13:21:15.001231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:21:15.001443 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.5M memory peak. Oct 28 13:21:15.003660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:21:15.028402 systemd[1]: Reload requested from client PID 2297 ('systemctl') (unit session-7.scope)... Oct 28 13:21:15.028417 systemd[1]: Reloading... Oct 28 13:21:15.100150 zram_generator::config[2340]: No configuration found. Oct 28 13:21:15.447773 systemd[1]: Reloading finished in 418 ms. Oct 28 13:21:15.513994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 28 13:21:15.514124 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 28 13:21:15.514457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:21:15.514500 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.4M memory peak. Oct 28 13:21:15.516352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:21:15.698625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:21:15.702487 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 13:21:15.744368 kubelet[2388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 28 13:21:15.744368 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 13:21:15.744368 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 13:21:15.744748 kubelet[2388]: I1028 13:21:15.744432 2388 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 13:21:16.251770 kubelet[2388]: I1028 13:21:16.251728 2388 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 28 13:21:16.251770 kubelet[2388]: I1028 13:21:16.251756 2388 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 13:21:16.252031 kubelet[2388]: I1028 13:21:16.251999 2388 server.go:954] "Client rotation is on, will bootstrap in background" Oct 28 13:21:16.280144 kubelet[2388]: E1028 13:21:16.280087 2388 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:16.280712 kubelet[2388]: I1028 13:21:16.280684 2388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:21:16.286591 kubelet[2388]: I1028 13:21:16.286568 2388 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 13:21:16.291903 kubelet[2388]: I1028 13:21:16.291859 2388 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 28 13:21:16.292143 kubelet[2388]: I1028 13:21:16.292104 2388 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 13:21:16.292316 kubelet[2388]: I1028 13:21:16.292130 2388 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 13:21:16.292748 kubelet[2388]: I1028 13:21:16.292723 2388 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 28 13:21:16.292748 kubelet[2388]: I1028 13:21:16.292736 2388 container_manager_linux.go:304] "Creating device plugin manager" Oct 28 13:21:16.292887 kubelet[2388]: I1028 13:21:16.292870 2388 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:21:16.295744 kubelet[2388]: I1028 13:21:16.295709 2388 kubelet.go:446] "Attempting to sync node with API server" Oct 28 13:21:16.295744 kubelet[2388]: I1028 13:21:16.295739 2388 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 13:21:16.295827 kubelet[2388]: I1028 13:21:16.295767 2388 kubelet.go:352] "Adding apiserver pod source" Oct 28 13:21:16.295827 kubelet[2388]: I1028 13:21:16.295777 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 13:21:16.298204 kubelet[2388]: I1028 13:21:16.298187 2388 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 28 13:21:16.298551 kubelet[2388]: I1028 13:21:16.298530 2388 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 28 13:21:16.298594 kubelet[2388]: W1028 13:21:16.298587 2388 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 28 13:21:16.299066 kubelet[2388]: W1028 13:21:16.298994 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:16.299127 kubelet[2388]: E1028 13:21:16.299079 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:16.300217 kubelet[2388]: W1028 13:21:16.300152 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:16.300217 kubelet[2388]: E1028 13:21:16.300215 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:16.300445 kubelet[2388]: I1028 13:21:16.300428 2388 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 13:21:16.300469 kubelet[2388]: I1028 13:21:16.300459 2388 server.go:1287] "Started kubelet" Oct 28 13:21:16.305725 kubelet[2388]: I1028 13:21:16.305562 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 13:21:16.310612 kubelet[2388]: I1028 13:21:16.310573 2388 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 13:21:16.311474 kubelet[2388]: I1028 13:21:16.311453 2388 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 13:21:16.312437 kubelet[2388]: I1028 13:21:16.312217 2388 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 13:21:16.312746 kubelet[2388]: E1028 13:21:16.312717 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.314672 kubelet[2388]: I1028 13:21:16.314636 2388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 13:21:16.314818 kubelet[2388]: I1028 13:21:16.314802 2388 server.go:479] "Adding debug handlers to kubelet server" Oct 28 13:21:16.315334 kubelet[2388]: E1028 13:21:16.315288 2388 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 13:21:16.315465 kubelet[2388]: I1028 13:21:16.315433 2388 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 13:21:16.315507 kubelet[2388]: I1028 13:21:16.315475 2388 reconciler.go:26] "Reconciler: start to sync state" Oct 28 13:21:16.316293 kubelet[2388]: W1028 13:21:16.316236 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:16.316293 kubelet[2388]: I1028 13:21:16.316257 2388 factory.go:221] Registration of the containerd container factory successfully Oct 28 13:21:16.316293 kubelet[2388]: I1028 13:21:16.316287 2388 factory.go:221] Registration of the systemd container factory successfully Oct 28 13:21:16.316293 kubelet[2388]: E1028 13:21:16.316290 2388 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:16.319065 kubelet[2388]: E1028 13:21:16.316332 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872aa52abba575d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 13:21:16.300441437 +0000 UTC m=+0.594496642,LastTimestamp:2025-10-28 13:21:16.300441437 +0000 UTC m=+0.594496642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 13:21:16.319065 kubelet[2388]: E1028 13:21:16.317634 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Oct 28 13:21:16.319065 kubelet[2388]: I1028 13:21:16.317794 2388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 13:21:16.319065 kubelet[2388]: I1028 13:21:16.318041 2388 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 13:21:16.324603 kubelet[2388]: I1028 13:21:16.324565 2388 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 28 13:21:16.326070 kubelet[2388]: I1028 13:21:16.325789 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 28 13:21:16.326070 kubelet[2388]: I1028 13:21:16.325814 2388 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 28 13:21:16.326070 kubelet[2388]: I1028 13:21:16.325834 2388 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 13:21:16.326070 kubelet[2388]: I1028 13:21:16.325841 2388 kubelet.go:2382] "Starting kubelet main sync loop" Oct 28 13:21:16.326070 kubelet[2388]: E1028 13:21:16.325881 2388 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 13:21:16.330745 kubelet[2388]: I1028 13:21:16.330723 2388 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 13:21:16.330745 kubelet[2388]: I1028 13:21:16.330737 2388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 13:21:16.330830 kubelet[2388]: I1028 13:21:16.330751 2388 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:21:16.413621 kubelet[2388]: E1028 13:21:16.413550 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.426987 kubelet[2388]: E1028 13:21:16.426940 2388 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 28 13:21:16.514349 kubelet[2388]: E1028 13:21:16.514257 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.518984 kubelet[2388]: E1028 13:21:16.518951 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: 
connection refused" interval="400ms" Oct 28 13:21:16.615286 kubelet[2388]: E1028 13:21:16.615243 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.627649 kubelet[2388]: E1028 13:21:16.627606 2388 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 28 13:21:16.716081 kubelet[2388]: E1028 13:21:16.716010 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.816634 kubelet[2388]: E1028 13:21:16.816496 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.917237 kubelet[2388]: E1028 13:21:16.917183 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:16.919762 kubelet[2388]: E1028 13:21:16.919724 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Oct 28 13:21:16.980575 kubelet[2388]: W1028 13:21:16.980492 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:16.980713 kubelet[2388]: E1028 13:21:16.980599 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:16.980713 kubelet[2388]: I1028 13:21:16.980662 2388 
policy_none.go:49] "None policy: Start" Oct 28 13:21:16.980713 kubelet[2388]: I1028 13:21:16.980697 2388 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 13:21:16.980713 kubelet[2388]: I1028 13:21:16.980709 2388 state_mem.go:35] "Initializing new in-memory state store" Oct 28 13:21:16.987708 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 13:21:17.002033 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 13:21:17.004939 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 13:21:17.016902 kubelet[2388]: I1028 13:21:17.016855 2388 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 28 13:21:17.017114 kubelet[2388]: I1028 13:21:17.017090 2388 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 13:21:17.017180 kubelet[2388]: I1028 13:21:17.017105 2388 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 13:21:17.017515 kubelet[2388]: I1028 13:21:17.017332 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 13:21:17.018216 kubelet[2388]: E1028 13:21:17.018196 2388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 13:21:17.018259 kubelet[2388]: E1028 13:21:17.018233 2388 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 13:21:17.035212 systemd[1]: Created slice kubepods-burstable-pod4541f38471ae6665e759731ddad80ad5.slice - libcontainer container kubepods-burstable-pod4541f38471ae6665e759731ddad80ad5.slice. 
Oct 28 13:21:17.062644 kubelet[2388]: E1028 13:21:17.062600 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:17.066509 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 28 13:21:17.068587 kubelet[2388]: E1028 13:21:17.068499 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:17.070746 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Oct 28 13:21:17.072403 kubelet[2388]: E1028 13:21:17.072384 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:17.118937 kubelet[2388]: I1028 13:21:17.118895 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:21:17.119315 kubelet[2388]: E1028 13:21:17.119280 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 28 13:21:17.120454 kubelet[2388]: I1028 13:21:17.120431 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:17.120526 kubelet[2388]: I1028 13:21:17.120463 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:17.120526 kubelet[2388]: I1028 13:21:17.120489 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:17.120526 kubelet[2388]: I1028 13:21:17.120510 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4541f38471ae6665e759731ddad80ad5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4541f38471ae6665e759731ddad80ad5\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:17.120602 kubelet[2388]: I1028 13:21:17.120531 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:17.120602 kubelet[2388]: I1028 13:21:17.120553 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:17.120602 kubelet[2388]: I1028 13:21:17.120582 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:17.120682 kubelet[2388]: I1028 13:21:17.120603 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4541f38471ae6665e759731ddad80ad5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4541f38471ae6665e759731ddad80ad5\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:17.120682 kubelet[2388]: I1028 13:21:17.120621 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4541f38471ae6665e759731ddad80ad5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4541f38471ae6665e759731ddad80ad5\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:17.244507 kubelet[2388]: W1028 13:21:17.244455 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:17.244578 kubelet[2388]: E1028 13:21:17.244511 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:17.320922 kubelet[2388]: I1028 13:21:17.320815 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:21:17.321092 kubelet[2388]: E1028 13:21:17.321069 2388 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 28 13:21:17.363273 kubelet[2388]: E1028 13:21:17.363228 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:17.363744 containerd[1612]: time="2025-10-28T13:21:17.363699816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4541f38471ae6665e759731ddad80ad5,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:17.368906 kubelet[2388]: E1028 13:21:17.368874 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:17.369526 containerd[1612]: time="2025-10-28T13:21:17.369486025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:17.372834 kubelet[2388]: E1028 13:21:17.372802 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:17.373122 containerd[1612]: time="2025-10-28T13:21:17.373095864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:17.406753 kubelet[2388]: W1028 13:21:17.406690 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:17.406809 kubelet[2388]: E1028 13:21:17.406756 2388 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:17.475972 kubelet[2388]: W1028 13:21:17.475923 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Oct 28 13:21:17.475972 kubelet[2388]: E1028 13:21:17.475955 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:17.720829 kubelet[2388]: E1028 13:21:17.720741 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Oct 28 13:21:17.722664 kubelet[2388]: I1028 13:21:17.722623 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:21:17.722965 kubelet[2388]: E1028 13:21:17.722933 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 28 13:21:17.886795 kubelet[2388]: W1028 13:21:17.886742 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: 
connection refused Oct 28 13:21:17.886795 kubelet[2388]: E1028 13:21:17.886785 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:18.326526 kubelet[2388]: E1028 13:21:18.326465 2388 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Oct 28 13:21:18.524105 kubelet[2388]: I1028 13:21:18.524036 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:21:18.524491 kubelet[2388]: E1028 13:21:18.524438 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 28 13:21:18.540108 containerd[1612]: time="2025-10-28T13:21:18.540006340Z" level=info msg="connecting to shim df5630314c4f3c4db8279fe0356409f7b7f109ff253bc80988695f25938b7463" address="unix:///run/containerd/s/807c2a8cbc40eb4851b34364c5d6532d61fd245564528b6ef505c42cc87a2460" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:18.549464 containerd[1612]: time="2025-10-28T13:21:18.549411846Z" level=info msg="connecting to shim f39b13e6ab3d1a68240bd10c22b2e3c12e7d75c412a2b5976b62b0ccb8fc64eb" address="unix:///run/containerd/s/8dfeada73b80724d6dcd3e8120a03fcc547e696bbfcc61ba556c6972fa9db31a" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:18.549982 containerd[1612]: time="2025-10-28T13:21:18.549819640Z" level=info msg="connecting 
to shim ff60a97be77ea771f788bf47dcea4d470e9e3edf24971b3ea205a6c78a17fe83" address="unix:///run/containerd/s/890c35a0aa6bd768e401ec5d0f944f6bec4138136e500376e665c15716104ad1" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:18.570259 systemd[1]: Started cri-containerd-df5630314c4f3c4db8279fe0356409f7b7f109ff253bc80988695f25938b7463.scope - libcontainer container df5630314c4f3c4db8279fe0356409f7b7f109ff253bc80988695f25938b7463. Oct 28 13:21:18.574806 systemd[1]: Started cri-containerd-ff60a97be77ea771f788bf47dcea4d470e9e3edf24971b3ea205a6c78a17fe83.scope - libcontainer container ff60a97be77ea771f788bf47dcea4d470e9e3edf24971b3ea205a6c78a17fe83. Oct 28 13:21:18.578544 systemd[1]: Started cri-containerd-f39b13e6ab3d1a68240bd10c22b2e3c12e7d75c412a2b5976b62b0ccb8fc64eb.scope - libcontainer container f39b13e6ab3d1a68240bd10c22b2e3c12e7d75c412a2b5976b62b0ccb8fc64eb. Oct 28 13:21:18.627867 containerd[1612]: time="2025-10-28T13:21:18.627790766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5630314c4f3c4db8279fe0356409f7b7f109ff253bc80988695f25938b7463\"" Oct 28 13:21:18.629290 containerd[1612]: time="2025-10-28T13:21:18.629264710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4541f38471ae6665e759731ddad80ad5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f39b13e6ab3d1a68240bd10c22b2e3c12e7d75c412a2b5976b62b0ccb8fc64eb\"" Oct 28 13:21:18.629967 kubelet[2388]: E1028 13:21:18.629925 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:18.630231 kubelet[2388]: E1028 13:21:18.630198 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 28 13:21:18.631133 containerd[1612]: time="2025-10-28T13:21:18.631070345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff60a97be77ea771f788bf47dcea4d470e9e3edf24971b3ea205a6c78a17fe83\"" Oct 28 13:21:18.631483 containerd[1612]: time="2025-10-28T13:21:18.631453734Z" level=info msg="CreateContainer within sandbox \"df5630314c4f3c4db8279fe0356409f7b7f109ff253bc80988695f25938b7463\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 13:21:18.631595 kubelet[2388]: E1028 13:21:18.631557 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:18.631800 containerd[1612]: time="2025-10-28T13:21:18.631694035Z" level=info msg="CreateContainer within sandbox \"f39b13e6ab3d1a68240bd10c22b2e3c12e7d75c412a2b5976b62b0ccb8fc64eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 13:21:18.633244 containerd[1612]: time="2025-10-28T13:21:18.633210829Z" level=info msg="CreateContainer within sandbox \"ff60a97be77ea771f788bf47dcea4d470e9e3edf24971b3ea205a6c78a17fe83\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 13:21:18.644413 containerd[1612]: time="2025-10-28T13:21:18.644378087Z" level=info msg="Container 0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:18.648034 containerd[1612]: time="2025-10-28T13:21:18.647992645Z" level=info msg="Container 176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:18.652599 containerd[1612]: time="2025-10-28T13:21:18.652548096Z" level=info msg="CreateContainer within sandbox \"df5630314c4f3c4db8279fe0356409f7b7f109ff253bc80988695f25938b7463\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee\"" Oct 28 13:21:18.653024 containerd[1612]: time="2025-10-28T13:21:18.652990416Z" level=info msg="StartContainer for \"0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee\"" Oct 28 13:21:18.654065 containerd[1612]: time="2025-10-28T13:21:18.654029153Z" level=info msg="connecting to shim 0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee" address="unix:///run/containerd/s/807c2a8cbc40eb4851b34364c5d6532d61fd245564528b6ef505c42cc87a2460" protocol=ttrpc version=3 Oct 28 13:21:18.655292 containerd[1612]: time="2025-10-28T13:21:18.655266654Z" level=info msg="Container 9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:18.659554 containerd[1612]: time="2025-10-28T13:21:18.659506994Z" level=info msg="CreateContainer within sandbox \"f39b13e6ab3d1a68240bd10c22b2e3c12e7d75c412a2b5976b62b0ccb8fc64eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551\"" Oct 28 13:21:18.659868 containerd[1612]: time="2025-10-28T13:21:18.659847022Z" level=info msg="StartContainer for \"176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551\"" Oct 28 13:21:18.660864 containerd[1612]: time="2025-10-28T13:21:18.660833511Z" level=info msg="connecting to shim 176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551" address="unix:///run/containerd/s/8dfeada73b80724d6dcd3e8120a03fcc547e696bbfcc61ba556c6972fa9db31a" protocol=ttrpc version=3 Oct 28 13:21:18.665200 containerd[1612]: time="2025-10-28T13:21:18.665163280Z" level=info msg="CreateContainer within sandbox \"ff60a97be77ea771f788bf47dcea4d470e9e3edf24971b3ea205a6c78a17fe83\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7\"" Oct 28 13:21:18.667066 containerd[1612]: time="2025-10-28T13:21:18.666145652Z" level=info msg="StartContainer for \"9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7\"" Oct 28 13:21:18.667066 containerd[1612]: time="2025-10-28T13:21:18.667018188Z" level=info msg="connecting to shim 9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7" address="unix:///run/containerd/s/890c35a0aa6bd768e401ec5d0f944f6bec4138136e500376e665c15716104ad1" protocol=ttrpc version=3 Oct 28 13:21:18.673242 systemd[1]: Started cri-containerd-0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee.scope - libcontainer container 0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee. Oct 28 13:21:18.685185 systemd[1]: Started cri-containerd-176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551.scope - libcontainer container 176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551. Oct 28 13:21:18.688336 systemd[1]: Started cri-containerd-9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7.scope - libcontainer container 9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7. 
Oct 28 13:21:18.734630 containerd[1612]: time="2025-10-28T13:21:18.734599042Z" level=info msg="StartContainer for \"0070a4ee51d3093080db8d3eeb1abb3f30bd3198b7ec66473102bec574998eee\" returns successfully" Oct 28 13:21:18.739834 containerd[1612]: time="2025-10-28T13:21:18.739790165Z" level=info msg="StartContainer for \"9f14281207cfda5d0367deca4ed8c6bfdce7a67a6d6bdc367112f4879d20e0f7\" returns successfully" Oct 28 13:21:18.750638 containerd[1612]: time="2025-10-28T13:21:18.750598090Z" level=info msg="StartContainer for \"176aee7b845cae01ce96313c20fff94950a7bdf88f9db0e11317ac04c7140551\" returns successfully" Oct 28 13:21:19.338804 kubelet[2388]: E1028 13:21:19.338759 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:19.339251 kubelet[2388]: E1028 13:21:19.338876 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:19.343252 kubelet[2388]: E1028 13:21:19.342422 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:19.343252 kubelet[2388]: E1028 13:21:19.342546 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:19.348954 kubelet[2388]: E1028 13:21:19.348898 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:19.349710 kubelet[2388]: E1028 13:21:19.349634 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:19.839992 
kubelet[2388]: E1028 13:21:19.839906 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 13:21:20.125827 kubelet[2388]: I1028 13:21:20.125713 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:21:20.136930 kubelet[2388]: I1028 13:21:20.136714 2388 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 13:21:20.136930 kubelet[2388]: E1028 13:21:20.136757 2388 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 28 13:21:20.145225 kubelet[2388]: E1028 13:21:20.145184 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:20.245715 kubelet[2388]: E1028 13:21:20.245666 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:20.346122 kubelet[2388]: E1028 13:21:20.346068 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:21:20.350150 kubelet[2388]: E1028 13:21:20.350134 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:20.350256 kubelet[2388]: E1028 13:21:20.350243 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:20.350313 kubelet[2388]: E1028 13:21:20.350299 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:20.350402 kubelet[2388]: E1028 13:21:20.350390 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:20.350456 kubelet[2388]: E1028 13:21:20.350445 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:21:20.350526 kubelet[2388]: E1028 13:21:20.350516 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:20.513422 kubelet[2388]: I1028 13:21:20.513364 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:20.517780 kubelet[2388]: E1028 13:21:20.517749 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:20.517780 kubelet[2388]: I1028 13:21:20.517770 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:20.518876 kubelet[2388]: E1028 13:21:20.518838 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:20.518876 kubelet[2388]: I1028 13:21:20.518856 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:20.519754 kubelet[2388]: E1028 13:21:20.519733 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:21.301790 kubelet[2388]: I1028 13:21:21.301736 2388 apiserver.go:52] "Watching apiserver" Oct 28 13:21:21.316237 
kubelet[2388]: I1028 13:21:21.316164 2388 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 13:21:23.169182 systemd[1]: Reload requested from client PID 2660 ('systemctl') (unit session-7.scope)... Oct 28 13:21:23.169204 systemd[1]: Reloading... Oct 28 13:21:23.254233 zram_generator::config[2704]: No configuration found. Oct 28 13:21:23.509958 systemd[1]: Reloading finished in 340 ms. Oct 28 13:21:23.535036 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:21:23.553556 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 13:21:23.553933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:21:23.553995 systemd[1]: kubelet.service: Consumed 1.077s CPU time, 133.2M memory peak. Oct 28 13:21:23.555982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:21:23.797994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:21:23.808422 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 13:21:23.849481 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 13:21:23.849481 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 13:21:23.849481 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 28 13:21:23.849999 kubelet[2749]: I1028 13:21:23.849557 2749 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 13:21:23.856687 kubelet[2749]: I1028 13:21:23.856646 2749 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 28 13:21:23.856687 kubelet[2749]: I1028 13:21:23.856678 2749 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 13:21:23.857016 kubelet[2749]: I1028 13:21:23.856997 2749 server.go:954] "Client rotation is on, will bootstrap in background" Oct 28 13:21:23.858447 kubelet[2749]: I1028 13:21:23.858422 2749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 28 13:21:23.861002 kubelet[2749]: I1028 13:21:23.860978 2749 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:21:23.865292 kubelet[2749]: I1028 13:21:23.865273 2749 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 13:21:23.869598 kubelet[2749]: I1028 13:21:23.869563 2749 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 28 13:21:23.869848 kubelet[2749]: I1028 13:21:23.869813 2749 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 13:21:23.869998 kubelet[2749]: I1028 13:21:23.869844 2749 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 13:21:23.869998 kubelet[2749]: I1028 13:21:23.869999 2749 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 28 13:21:23.870115 kubelet[2749]: I1028 13:21:23.870007 2749 container_manager_linux.go:304] "Creating device plugin manager" Oct 28 13:21:23.870115 kubelet[2749]: I1028 13:21:23.870072 2749 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:21:23.870245 kubelet[2749]: I1028 13:21:23.870222 2749 kubelet.go:446] "Attempting to sync node with API server" Oct 28 13:21:23.870291 kubelet[2749]: I1028 13:21:23.870251 2749 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 13:21:23.870291 kubelet[2749]: I1028 13:21:23.870278 2749 kubelet.go:352] "Adding apiserver pod source" Oct 28 13:21:23.870291 kubelet[2749]: I1028 13:21:23.870288 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 13:21:23.871038 kubelet[2749]: I1028 13:21:23.871015 2749 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 28 13:21:23.871409 kubelet[2749]: I1028 13:21:23.871387 2749 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 28 13:21:23.873393 kubelet[2749]: I1028 13:21:23.871756 2749 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 13:21:23.873393 kubelet[2749]: I1028 13:21:23.871785 2749 server.go:1287] "Started kubelet" Oct 28 13:21:23.877682 kubelet[2749]: I1028 13:21:23.875797 2749 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 13:21:23.877682 kubelet[2749]: I1028 13:21:23.876780 2749 server.go:479] "Adding debug handlers to kubelet server" Oct 28 13:21:23.878726 kubelet[2749]: I1028 13:21:23.878701 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 13:21:23.881074 kubelet[2749]: I1028 13:21:23.880102 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 13:21:23.881467 kubelet[2749]: I1028 13:21:23.881414 2749 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 13:21:23.882268 kubelet[2749]: I1028 13:21:23.882252 2749 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 13:21:23.883468 kubelet[2749]: I1028 13:21:23.883453 2749 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 13:21:23.883740 kubelet[2749]: I1028 13:21:23.883704 2749 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 13:21:23.883879 kubelet[2749]: I1028 13:21:23.883863 2749 reconciler.go:26] "Reconciler: start to sync state" Oct 28 13:21:23.884545 kubelet[2749]: I1028 13:21:23.884507 2749 factory.go:221] Registration of the systemd container factory successfully Oct 28 13:21:23.884845 kubelet[2749]: I1028 13:21:23.884615 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 13:21:23.884914 kubelet[2749]: E1028 13:21:23.884849 2749 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 13:21:23.886439 kubelet[2749]: I1028 13:21:23.886416 2749 factory.go:221] Registration of the containerd container factory successfully Oct 28 13:21:23.998418 kubelet[2749]: I1028 13:21:23.998365 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 28 13:21:23.999774 kubelet[2749]: I1028 13:21:23.999627 2749 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 28 13:21:23.999774 kubelet[2749]: I1028 13:21:23.999665 2749 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 28 13:21:23.999774 kubelet[2749]: I1028 13:21:23.999686 2749 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 13:21:23.999774 kubelet[2749]: I1028 13:21:23.999694 2749 kubelet.go:2382] "Starting kubelet main sync loop" Oct 28 13:21:23.999774 kubelet[2749]: E1028 13:21:23.999747 2749 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 13:21:24.036261 kubelet[2749]: I1028 13:21:24.036221 2749 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 13:21:24.036261 kubelet[2749]: I1028 13:21:24.036247 2749 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 13:21:24.036261 kubelet[2749]: I1028 13:21:24.036273 2749 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:21:24.036541 kubelet[2749]: I1028 13:21:24.036503 2749 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 13:21:24.036541 kubelet[2749]: I1028 13:21:24.036516 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 13:21:24.036541 kubelet[2749]: I1028 13:21:24.036542 2749 policy_none.go:49] "None policy: Start" Oct 28 13:21:24.036676 kubelet[2749]: I1028 13:21:24.036555 2749 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 13:21:24.036676 kubelet[2749]: I1028 13:21:24.036568 2749 state_mem.go:35] "Initializing new in-memory state store" Oct 28 13:21:24.036753 kubelet[2749]: I1028 13:21:24.036716 2749 state_mem.go:75] "Updated machine memory state" Oct 28 13:21:24.042908 kubelet[2749]: I1028 13:21:24.042858 2749 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 28 13:21:24.043214 kubelet[2749]: I1028 
13:21:24.043185 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 13:21:24.043265 kubelet[2749]: I1028 13:21:24.043208 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 13:21:24.043660 kubelet[2749]: I1028 13:21:24.043615 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 13:21:24.044730 kubelet[2749]: E1028 13:21:24.044607 2749 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 13:21:24.101410 kubelet[2749]: I1028 13:21:24.100982 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:24.101410 kubelet[2749]: I1028 13:21:24.101179 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:24.101410 kubelet[2749]: I1028 13:21:24.101198 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:24.151808 kubelet[2749]: I1028 13:21:24.151777 2749 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:21:24.156783 kubelet[2749]: I1028 13:21:24.156748 2749 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 28 13:21:24.156924 kubelet[2749]: I1028 13:21:24.156814 2749 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 13:21:24.188386 kubelet[2749]: I1028 13:21:24.188317 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:24.188386 kubelet[2749]: I1028 13:21:24.188381 2749 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:24.188624 kubelet[2749]: I1028 13:21:24.188423 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:24.188624 kubelet[2749]: I1028 13:21:24.188449 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4541f38471ae6665e759731ddad80ad5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4541f38471ae6665e759731ddad80ad5\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:24.188624 kubelet[2749]: I1028 13:21:24.188473 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:24.188624 kubelet[2749]: I1028 13:21:24.188504 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:24.188624 
kubelet[2749]: I1028 13:21:24.188526 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:21:24.188769 kubelet[2749]: I1028 13:21:24.188547 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4541f38471ae6665e759731ddad80ad5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4541f38471ae6665e759731ddad80ad5\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:24.188769 kubelet[2749]: I1028 13:21:24.188570 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4541f38471ae6665e759731ddad80ad5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4541f38471ae6665e759731ddad80ad5\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:24.407533 kubelet[2749]: E1028 13:21:24.407374 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:24.408247 kubelet[2749]: E1028 13:21:24.407901 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:24.408293 kubelet[2749]: E1028 13:21:24.408244 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:24.871074 kubelet[2749]: I1028 13:21:24.871018 2749 apiserver.go:52] "Watching apiserver" Oct 28 13:21:24.884663 kubelet[2749]: I1028 
13:21:24.884595 2749 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 13:21:25.017177 kubelet[2749]: I1028 13:21:25.017135 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:25.017177 kubelet[2749]: I1028 13:21:25.017170 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:25.017588 kubelet[2749]: E1028 13:21:25.017544 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:25.026219 kubelet[2749]: E1028 13:21:25.026178 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 13:21:25.026396 kubelet[2749]: E1028 13:21:25.026364 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:25.026667 kubelet[2749]: E1028 13:21:25.026626 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 13:21:25.026826 kubelet[2749]: E1028 13:21:25.026729 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:25.039885 kubelet[2749]: I1028 13:21:25.039700 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.039679877 podStartE2EDuration="1.039679877s" podCreationTimestamp="2025-10-28 13:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-28 13:21:25.039167496 +0000 UTC m=+1.226664627" watchObservedRunningTime="2025-10-28 13:21:25.039679877 +0000 UTC m=+1.227177008" Oct 28 13:21:25.047965 kubelet[2749]: I1028 13:21:25.047886 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.047868661 podStartE2EDuration="1.047868661s" podCreationTimestamp="2025-10-28 13:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:21:25.047834346 +0000 UTC m=+1.235331487" watchObservedRunningTime="2025-10-28 13:21:25.047868661 +0000 UTC m=+1.235365792" Oct 28 13:21:25.061406 kubelet[2749]: I1028 13:21:25.061339 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.061322246 podStartE2EDuration="1.061322246s" podCreationTimestamp="2025-10-28 13:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:21:25.054185224 +0000 UTC m=+1.241682355" watchObservedRunningTime="2025-10-28 13:21:25.061322246 +0000 UTC m=+1.248819387" Oct 28 13:21:26.018163 kubelet[2749]: E1028 13:21:26.018131 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:26.018611 kubelet[2749]: E1028 13:21:26.018198 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:27.019684 kubelet[2749]: E1028 13:21:27.019596 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Oct 28 13:21:27.020233 kubelet[2749]: E1028 13:21:27.019697 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:30.415349 kubelet[2749]: I1028 13:21:30.415302 2749 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 13:21:30.415803 containerd[1612]: time="2025-10-28T13:21:30.415658157Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 28 13:21:30.416127 kubelet[2749]: I1028 13:21:30.415852 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 13:21:32.028417 systemd[1]: Created slice kubepods-besteffort-pod3c05bcf7_0bba_4070_aa18_026e62fd01a2.slice - libcontainer container kubepods-besteffort-pod3c05bcf7_0bba_4070_aa18_026e62fd01a2.slice. Oct 28 13:21:32.035003 kubelet[2749]: I1028 13:21:32.034954 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c05bcf7-0bba-4070-aa18-026e62fd01a2-kube-proxy\") pod \"kube-proxy-xbrv5\" (UID: \"3c05bcf7-0bba-4070-aa18-026e62fd01a2\") " pod="kube-system/kube-proxy-xbrv5" Oct 28 13:21:32.036816 kubelet[2749]: I1028 13:21:32.036743 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c05bcf7-0bba-4070-aa18-026e62fd01a2-lib-modules\") pod \"kube-proxy-xbrv5\" (UID: \"3c05bcf7-0bba-4070-aa18-026e62fd01a2\") " pod="kube-system/kube-proxy-xbrv5" Oct 28 13:21:32.036816 kubelet[2749]: I1028 13:21:32.036823 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnp5\" (UniqueName: \"kubernetes.io/projected/3c05bcf7-0bba-4070-aa18-026e62fd01a2-kube-api-access-fqnp5\") pod 
\"kube-proxy-xbrv5\" (UID: \"3c05bcf7-0bba-4070-aa18-026e62fd01a2\") " pod="kube-system/kube-proxy-xbrv5" Oct 28 13:21:32.037517 kubelet[2749]: I1028 13:21:32.036853 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c05bcf7-0bba-4070-aa18-026e62fd01a2-xtables-lock\") pod \"kube-proxy-xbrv5\" (UID: \"3c05bcf7-0bba-4070-aa18-026e62fd01a2\") " pod="kube-system/kube-proxy-xbrv5" Oct 28 13:21:32.078784 systemd[1]: Created slice kubepods-besteffort-pode88a349d_586c_40a4_9106_7fa25fd6e116.slice - libcontainer container kubepods-besteffort-pode88a349d_586c_40a4_9106_7fa25fd6e116.slice. Oct 28 13:21:32.137629 kubelet[2749]: I1028 13:21:32.137530 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e88a349d-586c-40a4-9106-7fa25fd6e116-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rm56w\" (UID: \"e88a349d-586c-40a4-9106-7fa25fd6e116\") " pod="tigera-operator/tigera-operator-7dcd859c48-rm56w" Oct 28 13:21:32.137828 kubelet[2749]: I1028 13:21:32.137766 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84h62\" (UniqueName: \"kubernetes.io/projected/e88a349d-586c-40a4-9106-7fa25fd6e116-kube-api-access-84h62\") pod \"tigera-operator-7dcd859c48-rm56w\" (UID: \"e88a349d-586c-40a4-9106-7fa25fd6e116\") " pod="tigera-operator/tigera-operator-7dcd859c48-rm56w" Oct 28 13:21:32.339793 kubelet[2749]: E1028 13:21:32.339629 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:32.340566 containerd[1612]: time="2025-10-28T13:21:32.340525045Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-xbrv5,Uid:3c05bcf7-0bba-4070-aa18-026e62fd01a2,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:32.385300 containerd[1612]: time="2025-10-28T13:21:32.385251663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rm56w,Uid:e88a349d-586c-40a4-9106-7fa25fd6e116,Namespace:tigera-operator,Attempt:0,}" Oct 28 13:21:32.582706 kubelet[2749]: E1028 13:21:32.582672 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:32.707559 containerd[1612]: time="2025-10-28T13:21:32.707299807Z" level=info msg="connecting to shim 0cf3f01719afe4d4cdbdc3f07c0eee4e7fda0c50ad37f96e53bee5817415e2de" address="unix:///run/containerd/s/08a989b774f015858507b818b671e69fd74b61f49f0c2e91479452aa81b23dd4" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:32.710951 containerd[1612]: time="2025-10-28T13:21:32.710815984Z" level=info msg="connecting to shim f27749a5c147ddd8ae91f3c3417a16617bfb7ad4aba83a5d67485971986b1835" address="unix:///run/containerd/s/a5e1fd8f92c2441f2a0f1f20c453b11285f9d69122ab74f6746c82819887f3ee" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:32.749348 systemd[1]: Started cri-containerd-0cf3f01719afe4d4cdbdc3f07c0eee4e7fda0c50ad37f96e53bee5817415e2de.scope - libcontainer container 0cf3f01719afe4d4cdbdc3f07c0eee4e7fda0c50ad37f96e53bee5817415e2de. 
Oct 28 13:21:32.790114 containerd[1612]: time="2025-10-28T13:21:32.790068232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xbrv5,Uid:3c05bcf7-0bba-4070-aa18-026e62fd01a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf3f01719afe4d4cdbdc3f07c0eee4e7fda0c50ad37f96e53bee5817415e2de\"" Oct 28 13:21:32.791538 kubelet[2749]: E1028 13:21:32.791515 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:32.793896 containerd[1612]: time="2025-10-28T13:21:32.793842520Z" level=info msg="CreateContainer within sandbox \"0cf3f01719afe4d4cdbdc3f07c0eee4e7fda0c50ad37f96e53bee5817415e2de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 13:21:32.798621 systemd[1]: Started cri-containerd-f27749a5c147ddd8ae91f3c3417a16617bfb7ad4aba83a5d67485971986b1835.scope - libcontainer container f27749a5c147ddd8ae91f3c3417a16617bfb7ad4aba83a5d67485971986b1835. 
Oct 28 13:21:32.813081 containerd[1612]: time="2025-10-28T13:21:32.812958840Z" level=info msg="Container 28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:32.822495 containerd[1612]: time="2025-10-28T13:21:32.822428508Z" level=info msg="CreateContainer within sandbox \"0cf3f01719afe4d4cdbdc3f07c0eee4e7fda0c50ad37f96e53bee5817415e2de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79\"" Oct 28 13:21:32.823139 containerd[1612]: time="2025-10-28T13:21:32.823108888Z" level=info msg="StartContainer for \"28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79\"" Oct 28 13:21:32.824603 containerd[1612]: time="2025-10-28T13:21:32.824572681Z" level=info msg="connecting to shim 28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79" address="unix:///run/containerd/s/08a989b774f015858507b818b671e69fd74b61f49f0c2e91479452aa81b23dd4" protocol=ttrpc version=3 Oct 28 13:21:32.842336 systemd[1]: Started cri-containerd-28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79.scope - libcontainer container 28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79. 
Oct 28 13:21:32.858033 containerd[1612]: time="2025-10-28T13:21:32.857962616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rm56w,Uid:e88a349d-586c-40a4-9106-7fa25fd6e116,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f27749a5c147ddd8ae91f3c3417a16617bfb7ad4aba83a5d67485971986b1835\"" Oct 28 13:21:32.861795 containerd[1612]: time="2025-10-28T13:21:32.861429578Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 13:21:32.903289 containerd[1612]: time="2025-10-28T13:21:32.903239362Z" level=info msg="StartContainer for \"28addfad6b42d341f6aa6d64625397c0922c5baf567516065cd0bab84900ec79\" returns successfully" Oct 28 13:21:33.031608 kubelet[2749]: E1028 13:21:33.031482 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:33.031608 kubelet[2749]: E1028 13:21:33.031514 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:33.051370 kubelet[2749]: I1028 13:21:33.051303 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xbrv5" podStartSLOduration=2.051279929 podStartE2EDuration="2.051279929s" podCreationTimestamp="2025-10-28 13:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:21:33.050556126 +0000 UTC m=+9.238053257" watchObservedRunningTime="2025-10-28 13:21:33.051279929 +0000 UTC m=+9.238777060" Oct 28 13:21:34.280171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860255021.mount: Deactivated successfully. 
Oct 28 13:21:34.389954 kubelet[2749]: E1028 13:21:34.389891 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:35.034230 kubelet[2749]: E1028 13:21:35.034179 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:35.887591 kubelet[2749]: E1028 13:21:35.887555 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:36.036306 kubelet[2749]: E1028 13:21:36.036266 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:36.061090 kubelet[2749]: E1028 13:21:36.036350 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:36.772969 containerd[1612]: time="2025-10-28T13:21:36.772913041Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:36.815832 containerd[1612]: time="2025-10-28T13:21:36.815786799Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Oct 28 13:21:36.817547 containerd[1612]: time="2025-10-28T13:21:36.817219219Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:36.912950 containerd[1612]: time="2025-10-28T13:21:36.912877380Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:36.913664 containerd[1612]: time="2025-10-28T13:21:36.913623288Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.052159013s" Oct 28 13:21:36.913664 containerd[1612]: time="2025-10-28T13:21:36.913650300Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 28 13:21:36.915725 containerd[1612]: time="2025-10-28T13:21:36.915686823Z" level=info msg="CreateContainer within sandbox \"f27749a5c147ddd8ae91f3c3417a16617bfb7ad4aba83a5d67485971986b1835\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 13:21:37.381042 containerd[1612]: time="2025-10-28T13:21:37.380993041Z" level=info msg="Container 2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:37.436340 containerd[1612]: time="2025-10-28T13:21:37.436290396Z" level=info msg="CreateContainer within sandbox \"f27749a5c147ddd8ae91f3c3417a16617bfb7ad4aba83a5d67485971986b1835\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852\"" Oct 28 13:21:37.436857 containerd[1612]: time="2025-10-28T13:21:37.436826657Z" level=info msg="StartContainer for \"2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852\"" Oct 28 13:21:37.442204 containerd[1612]: time="2025-10-28T13:21:37.442136220Z" level=info msg="connecting to shim 
2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852" address="unix:///run/containerd/s/a5e1fd8f92c2441f2a0f1f20c453b11285f9d69122ab74f6746c82819887f3ee" protocol=ttrpc version=3 Oct 28 13:21:37.463193 systemd[1]: Started cri-containerd-2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852.scope - libcontainer container 2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852. Oct 28 13:21:37.627629 containerd[1612]: time="2025-10-28T13:21:37.627562707Z" level=info msg="StartContainer for \"2f19592bb05ec18d9434dea2d07c576c37ab06102ca8789e354038902d395852\" returns successfully" Oct 28 13:21:38.215474 kubelet[2749]: I1028 13:21:38.215394 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rm56w" podStartSLOduration=2.161371333 podStartE2EDuration="6.215373353s" podCreationTimestamp="2025-10-28 13:21:32 +0000 UTC" firstStartedPulling="2025-10-28 13:21:32.860399407 +0000 UTC m=+9.047896538" lastFinishedPulling="2025-10-28 13:21:36.914401427 +0000 UTC m=+13.101898558" observedRunningTime="2025-10-28 13:21:38.21510569 +0000 UTC m=+14.402602821" watchObservedRunningTime="2025-10-28 13:21:38.215373353 +0000 UTC m=+14.402870484" Oct 28 13:21:39.795837 update_engine[1600]: I20251028 13:21:39.795739 1600 update_attempter.cc:509] Updating boot flags... Oct 28 13:21:43.947621 sudo[1822]: pam_unix(sudo:session): session closed for user root Oct 28 13:21:43.949717 sshd[1821]: Connection closed by 10.0.0.1 port 57074 Oct 28 13:21:43.950371 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:43.958466 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:57074.service: Deactivated successfully. Oct 28 13:21:43.964239 systemd[1]: session-7.scope: Deactivated successfully. Oct 28 13:21:43.966160 systemd[1]: session-7.scope: Consumed 4.385s CPU time, 217.8M memory peak. Oct 28 13:21:43.971028 systemd-logind[1598]: Session 7 logged out. 
Waiting for processes to exit. Oct 28 13:21:43.973622 systemd-logind[1598]: Removed session 7. Oct 28 13:21:48.118981 systemd[1]: Created slice kubepods-besteffort-pod4e84c9ee_4669_4dfc_a188_07fcff823765.slice - libcontainer container kubepods-besteffort-pod4e84c9ee_4669_4dfc_a188_07fcff823765.slice. Oct 28 13:21:48.143851 kubelet[2749]: I1028 13:21:48.143803 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4e84c9ee-4669-4dfc-a188-07fcff823765-typha-certs\") pod \"calico-typha-6b4597cc8f-n4h49\" (UID: \"4e84c9ee-4669-4dfc-a188-07fcff823765\") " pod="calico-system/calico-typha-6b4597cc8f-n4h49" Oct 28 13:21:48.144419 kubelet[2749]: I1028 13:21:48.144401 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg55r\" (UniqueName: \"kubernetes.io/projected/4e84c9ee-4669-4dfc-a188-07fcff823765-kube-api-access-zg55r\") pod \"calico-typha-6b4597cc8f-n4h49\" (UID: \"4e84c9ee-4669-4dfc-a188-07fcff823765\") " pod="calico-system/calico-typha-6b4597cc8f-n4h49" Oct 28 13:21:48.144546 kubelet[2749]: I1028 13:21:48.144532 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e84c9ee-4669-4dfc-a188-07fcff823765-tigera-ca-bundle\") pod \"calico-typha-6b4597cc8f-n4h49\" (UID: \"4e84c9ee-4669-4dfc-a188-07fcff823765\") " pod="calico-system/calico-typha-6b4597cc8f-n4h49" Oct 28 13:21:48.205610 systemd[1]: Created slice kubepods-besteffort-podaf10ff44_ca1d_4b77_bf07_ecd174badccd.slice - libcontainer container kubepods-besteffort-podaf10ff44_ca1d_4b77_bf07_ecd174badccd.slice. 
Oct 28 13:21:48.245716 kubelet[2749]: I1028 13:21:48.245683 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-cni-net-dir\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245716 kubelet[2749]: I1028 13:21:48.245714 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af10ff44-ca1d-4b77-bf07-ecd174badccd-tigera-ca-bundle\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245890 kubelet[2749]: I1028 13:21:48.245741 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-cni-log-dir\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245890 kubelet[2749]: I1028 13:21:48.245756 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblv2\" (UniqueName: \"kubernetes.io/projected/af10ff44-ca1d-4b77-bf07-ecd174badccd-kube-api-access-tblv2\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245890 kubelet[2749]: I1028 13:21:48.245871 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-flexvol-driver-host\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245890 kubelet[2749]: I1028 
13:21:48.245890 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-cni-bin-dir\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245990 kubelet[2749]: I1028 13:21:48.245909 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-lib-modules\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245990 kubelet[2749]: I1028 13:21:48.245922 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-policysync\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245990 kubelet[2749]: I1028 13:21:48.245940 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af10ff44-ca1d-4b77-bf07-ecd174badccd-node-certs\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.245990 kubelet[2749]: I1028 13:21:48.245957 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-xtables-lock\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.246115 kubelet[2749]: I1028 13:21:48.246074 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-var-lib-calico\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.246141 kubelet[2749]: I1028 13:21:48.246115 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af10ff44-ca1d-4b77-bf07-ecd174badccd-var-run-calico\") pod \"calico-node-rn62f\" (UID: \"af10ff44-ca1d-4b77-bf07-ecd174badccd\") " pod="calico-system/calico-node-rn62f" Oct 28 13:21:48.349118 kubelet[2749]: E1028 13:21:48.349026 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.349118 kubelet[2749]: W1028 13:21:48.349060 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.349118 kubelet[2749]: E1028 13:21:48.349089 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.352746 kubelet[2749]: E1028 13:21:48.352701 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.352746 kubelet[2749]: W1028 13:21:48.352726 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.352746 kubelet[2749]: E1028 13:21:48.352748 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.356685 kubelet[2749]: E1028 13:21:48.356655 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.356685 kubelet[2749]: W1028 13:21:48.356678 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.356760 kubelet[2749]: E1028 13:21:48.356698 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.383382 kubelet[2749]: E1028 13:21:48.383256 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:21:48.415104 kubelet[2749]: E1028 13:21:48.414357 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.415104 kubelet[2749]: W1028 13:21:48.415096 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.415278 kubelet[2749]: E1028 13:21:48.415125 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.415328 kubelet[2749]: E1028 13:21:48.415310 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.415328 kubelet[2749]: W1028 13:21:48.415318 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.415328 kubelet[2749]: E1028 13:21:48.415328 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.415518 kubelet[2749]: E1028 13:21:48.415497 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.415518 kubelet[2749]: W1028 13:21:48.415512 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.415518 kubelet[2749]: E1028 13:21:48.415522 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.415807 kubelet[2749]: E1028 13:21:48.415788 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.415807 kubelet[2749]: W1028 13:21:48.415800 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.415807 kubelet[2749]: E1028 13:21:48.415810 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.416036 kubelet[2749]: E1028 13:21:48.416020 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.416036 kubelet[2749]: W1028 13:21:48.416030 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.416106 kubelet[2749]: E1028 13:21:48.416039 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.416292 kubelet[2749]: E1028 13:21:48.416261 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.416292 kubelet[2749]: W1028 13:21:48.416283 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.416345 kubelet[2749]: E1028 13:21:48.416306 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.416592 kubelet[2749]: E1028 13:21:48.416578 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.416592 kubelet[2749]: W1028 13:21:48.416589 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.416651 kubelet[2749]: E1028 13:21:48.416598 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.416875 kubelet[2749]: E1028 13:21:48.416847 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.416875 kubelet[2749]: W1028 13:21:48.416860 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.416875 kubelet[2749]: E1028 13:21:48.416869 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.417153 kubelet[2749]: E1028 13:21:48.417039 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.417153 kubelet[2749]: W1028 13:21:48.417046 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.417153 kubelet[2749]: E1028 13:21:48.417070 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.417315 kubelet[2749]: E1028 13:21:48.417263 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.417315 kubelet[2749]: W1028 13:21:48.417274 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.417315 kubelet[2749]: E1028 13:21:48.417286 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.417511 kubelet[2749]: E1028 13:21:48.417490 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.417511 kubelet[2749]: W1028 13:21:48.417500 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.417511 kubelet[2749]: E1028 13:21:48.417509 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.417977 kubelet[2749]: E1028 13:21:48.417721 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.417977 kubelet[2749]: W1028 13:21:48.417732 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.417977 kubelet[2749]: E1028 13:21:48.417742 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.417977 kubelet[2749]: E1028 13:21:48.417909 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.417977 kubelet[2749]: W1028 13:21:48.417919 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.417977 kubelet[2749]: E1028 13:21:48.417926 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.418174 kubelet[2749]: E1028 13:21:48.418117 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.418174 kubelet[2749]: W1028 13:21:48.418126 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.418174 kubelet[2749]: E1028 13:21:48.418136 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.418382 kubelet[2749]: E1028 13:21:48.418342 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.418382 kubelet[2749]: W1028 13:21:48.418352 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.418382 kubelet[2749]: E1028 13:21:48.418362 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.418601 kubelet[2749]: E1028 13:21:48.418582 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.418601 kubelet[2749]: W1028 13:21:48.418593 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.418601 kubelet[2749]: E1028 13:21:48.418601 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.419162 kubelet[2749]: E1028 13:21:48.418756 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.419162 kubelet[2749]: W1028 13:21:48.418764 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.419162 kubelet[2749]: E1028 13:21:48.418771 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.419162 kubelet[2749]: E1028 13:21:48.418908 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.419162 kubelet[2749]: W1028 13:21:48.418914 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.419162 kubelet[2749]: E1028 13:21:48.418925 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.419162 kubelet[2749]: E1028 13:21:48.419108 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.419162 kubelet[2749]: W1028 13:21:48.419115 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.419162 kubelet[2749]: E1028 13:21:48.419123 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.419359 kubelet[2749]: E1028 13:21:48.419268 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.419359 kubelet[2749]: W1028 13:21:48.419275 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.419359 kubelet[2749]: E1028 13:21:48.419282 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.431646 kubelet[2749]: E1028 13:21:48.431584 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:48.432130 containerd[1612]: time="2025-10-28T13:21:48.432096062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4597cc8f-n4h49,Uid:4e84c9ee-4669-4dfc-a188-07fcff823765,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:48.447742 kubelet[2749]: E1028 13:21:48.447688 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.447742 kubelet[2749]: W1028 13:21:48.447727 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.447742 kubelet[2749]: E1028 13:21:48.447744 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.447906 kubelet[2749]: I1028 13:21:48.447766 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f768eb5b-b675-4026-8f12-83b3103b89d1-registration-dir\") pod \"csi-node-driver-4cbn9\" (UID: \"f768eb5b-b675-4026-8f12-83b3103b89d1\") " pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:21:48.450217 kubelet[2749]: E1028 13:21:48.447996 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450217 kubelet[2749]: W1028 13:21:48.448009 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450217 kubelet[2749]: E1028 13:21:48.448018 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.450217 kubelet[2749]: I1028 13:21:48.448059 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f768eb5b-b675-4026-8f12-83b3103b89d1-socket-dir\") pod \"csi-node-driver-4cbn9\" (UID: \"f768eb5b-b675-4026-8f12-83b3103b89d1\") " pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:21:48.450217 kubelet[2749]: E1028 13:21:48.448258 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450217 kubelet[2749]: W1028 13:21:48.448266 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450217 kubelet[2749]: E1028 13:21:48.448274 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.450217 kubelet[2749]: I1028 13:21:48.448303 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n225x\" (UniqueName: \"kubernetes.io/projected/f768eb5b-b675-4026-8f12-83b3103b89d1-kube-api-access-n225x\") pod \"csi-node-driver-4cbn9\" (UID: \"f768eb5b-b675-4026-8f12-83b3103b89d1\") " pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:21:48.450217 kubelet[2749]: E1028 13:21:48.448519 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450706 kubelet[2749]: W1028 13:21:48.448547 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450706 kubelet[2749]: E1028 13:21:48.448557 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.450706 kubelet[2749]: I1028 13:21:48.448569 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f768eb5b-b675-4026-8f12-83b3103b89d1-kubelet-dir\") pod \"csi-node-driver-4cbn9\" (UID: \"f768eb5b-b675-4026-8f12-83b3103b89d1\") " pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:21:48.450706 kubelet[2749]: E1028 13:21:48.448780 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450706 kubelet[2749]: W1028 13:21:48.448790 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450706 kubelet[2749]: E1028 13:21:48.448798 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.450706 kubelet[2749]: I1028 13:21:48.448828 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f768eb5b-b675-4026-8f12-83b3103b89d1-varrun\") pod \"csi-node-driver-4cbn9\" (UID: \"f768eb5b-b675-4026-8f12-83b3103b89d1\") " pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:21:48.450706 kubelet[2749]: E1028 13:21:48.448998 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450955 kubelet[2749]: W1028 13:21:48.449005 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450955 kubelet[2749]: E1028 13:21:48.449013 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.450955 kubelet[2749]: E1028 13:21:48.449195 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450955 kubelet[2749]: W1028 13:21:48.449259 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450955 kubelet[2749]: E1028 13:21:48.449296 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.450955 kubelet[2749]: E1028 13:21:48.449464 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450955 kubelet[2749]: W1028 13:21:48.449488 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.450955 kubelet[2749]: E1028 13:21:48.449529 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.450955 kubelet[2749]: E1028 13:21:48.449682 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.450955 kubelet[2749]: W1028 13:21:48.449690 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.449742 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.449891 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.451294 kubelet[2749]: W1028 13:21:48.449897 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.449930 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.450092 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.451294 kubelet[2749]: W1028 13:21:48.450099 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.450146 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.450259 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.451294 kubelet[2749]: W1028 13:21:48.450267 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.451294 kubelet[2749]: E1028 13:21:48.450274 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.452516 kubelet[2749]: E1028 13:21:48.451990 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.452516 kubelet[2749]: W1028 13:21:48.452004 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.452516 kubelet[2749]: E1028 13:21:48.452012 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:48.452516 kubelet[2749]: E1028 13:21:48.452199 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.452516 kubelet[2749]: W1028 13:21:48.452206 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.452516 kubelet[2749]: E1028 13:21:48.452215 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.452516 kubelet[2749]: E1028 13:21:48.452361 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.452516 kubelet[2749]: W1028 13:21:48.452367 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.452516 kubelet[2749]: E1028 13:21:48.452375 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:48.454254 containerd[1612]: time="2025-10-28T13:21:48.454206614Z" level=info msg="connecting to shim 90e8715ed3365666eb69b4524dca71519561686f5563f61d088ac7bc65cd6b29" address="unix:///run/containerd/s/fb1fa13fce77d9aa21e77cd89453fecbd599108539d2f70216a9f796797b3e04" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:48.483285 systemd[1]: Started cri-containerd-90e8715ed3365666eb69b4524dca71519561686f5563f61d088ac7bc65cd6b29.scope - libcontainer container 90e8715ed3365666eb69b4524dca71519561686f5563f61d088ac7bc65cd6b29. 
Oct 28 13:21:48.511662 kubelet[2749]: E1028 13:21:48.511637 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:48.512397 containerd[1612]: time="2025-10-28T13:21:48.512372738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rn62f,Uid:af10ff44-ca1d-4b77-bf07-ecd174badccd,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:48.531647 containerd[1612]: time="2025-10-28T13:21:48.531612460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4597cc8f-n4h49,Uid:4e84c9ee-4669-4dfc-a188-07fcff823765,Namespace:calico-system,Attempt:0,} returns sandbox id \"90e8715ed3365666eb69b4524dca71519561686f5563f61d088ac7bc65cd6b29\"" Oct 28 13:21:48.532179 kubelet[2749]: E1028 13:21:48.532149 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:48.533397 containerd[1612]: time="2025-10-28T13:21:48.533330632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 28 13:21:48.538234 containerd[1612]: time="2025-10-28T13:21:48.538138221Z" level=info msg="connecting to shim 64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb" address="unix:///run/containerd/s/db8f9ee6bca277799a7974ec148e88972b5203a0e1cc371c04fb61289a53089b" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:48.549226 kubelet[2749]: E1028 13:21:48.549206 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:48.549419 kubelet[2749]: W1028 13:21:48.549343 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:48.549419 kubelet[2749]: 
E1028 13:21:48.549364 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 13:21:48.549695 kubelet[2749]: E1028 13:21:48.549636 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 13:21:48.549695 kubelet[2749]: W1028 13:21:48.549647 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 13:21:48.549695 kubelet[2749]: E1028 13:21:48.549660 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 13:21:48.561862 kubelet[2749]: E1028 13:21:48.561632 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 13:21:48.561862 kubelet[2749]: W1028 13:21:48.561639 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 13:21:48.561862 kubelet[2749]: E1028 13:21:48.561647 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 13:21:48.564305 systemd[1]: Started cri-containerd-64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb.scope - libcontainer container 64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb.
Oct 28 13:21:48.573481 kubelet[2749]: E1028 13:21:48.573439 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 13:21:48.573481 kubelet[2749]: W1028 13:21:48.573470 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 13:21:48.573759 kubelet[2749]: E1028 13:21:48.573490 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 13:21:48.629797 containerd[1612]: time="2025-10-28T13:21:48.629742286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rn62f,Uid:af10ff44-ca1d-4b77-bf07-ecd174badccd,Namespace:calico-system,Attempt:0,} returns sandbox id \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\""
Oct 28 13:21:48.630513 kubelet[2749]: E1028 13:21:48.630470 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 13:21:49.999521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351949806.mount: Deactivated successfully.
Oct 28 13:21:50.001233 kubelet[2749]: E1028 13:21:50.001200 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1"
Oct 28 13:21:50.689596 containerd[1612]: time="2025-10-28T13:21:50.689546615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 13:21:50.690345 containerd[1612]: time="2025-10-28T13:21:50.690303961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893"
Oct 28 13:21:50.691400 containerd[1612]: time="2025-10-28T13:21:50.691369041Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 13:21:50.693230 containerd[1612]: time="2025-10-28T13:21:50.693207205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 13:21:50.693661 containerd[1612]: time="2025-10-28T13:21:50.693629927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.160274207s"
Oct 28 13:21:50.693661 containerd[1612]: time="2025-10-28T13:21:50.693660605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 28 13:21:50.699688 containerd[1612]: time="2025-10-28T13:21:50.699667213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 28 13:21:50.711821 containerd[1612]: time="2025-10-28T13:21:50.711781591Z" level=info msg="CreateContainer within sandbox \"90e8715ed3365666eb69b4524dca71519561686f5563f61d088ac7bc65cd6b29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 28 13:21:50.722068 containerd[1612]: time="2025-10-28T13:21:50.721875908Z" level=info msg="Container 7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4: CDI devices from CRI Config.CDIDevices: []"
Oct 28 13:21:50.730028 containerd[1612]: time="2025-10-28T13:21:50.729982598Z" level=info msg="CreateContainer within sandbox \"90e8715ed3365666eb69b4524dca71519561686f5563f61d088ac7bc65cd6b29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4\""
Oct 28 13:21:50.730599 containerd[1612]: time="2025-10-28T13:21:50.730495960Z" level=info msg="StartContainer for \"7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4\""
Oct 28 13:21:50.731528 containerd[1612]: time="2025-10-28T13:21:50.731493392Z" level=info msg="connecting to shim 7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4" address="unix:///run/containerd/s/fb1fa13fce77d9aa21e77cd89453fecbd599108539d2f70216a9f796797b3e04" protocol=ttrpc version=3
Oct 28 13:21:50.757303 systemd[1]: Started cri-containerd-7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4.scope - libcontainer container 7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4.
Oct 28 13:21:50.802715 containerd[1612]: time="2025-10-28T13:21:50.802669825Z" level=info msg="StartContainer for \"7e0b0ce639dee69ebf6b5532b3e22ee89d6c17d8d4828136525f14fef758bbe4\" returns successfully" Oct 28 13:21:51.065974 kubelet[2749]: E1028 13:21:51.065842 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:51.077306 kubelet[2749]: I1028 13:21:51.077231 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b4597cc8f-n4h49" podStartSLOduration=0.910368725 podStartE2EDuration="3.077130983s" podCreationTimestamp="2025-10-28 13:21:48 +0000 UTC" firstStartedPulling="2025-10-28 13:21:48.532810696 +0000 UTC m=+24.720307827" lastFinishedPulling="2025-10-28 13:21:50.699572954 +0000 UTC m=+26.887070085" observedRunningTime="2025-10-28 13:21:51.076995597 +0000 UTC m=+27.264492728" watchObservedRunningTime="2025-10-28 13:21:51.077130983 +0000 UTC m=+27.264628114" Oct 28 13:21:51.138456 kubelet[2749]: E1028 13:21:51.138410 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.138456 kubelet[2749]: W1028 13:21:51.138434 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.138456 kubelet[2749]: E1028 13:21:51.138466 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.138654 kubelet[2749]: E1028 13:21:51.138642 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.138654 kubelet[2749]: W1028 13:21:51.138650 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.138699 kubelet[2749]: E1028 13:21:51.138658 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.138838 kubelet[2749]: E1028 13:21:51.138812 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.138838 kubelet[2749]: W1028 13:21:51.138823 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.138838 kubelet[2749]: E1028 13:21:51.138830 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.139047 kubelet[2749]: E1028 13:21:51.139018 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.139047 kubelet[2749]: W1028 13:21:51.139033 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.139047 kubelet[2749]: E1028 13:21:51.139041 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.139226 kubelet[2749]: E1028 13:21:51.139209 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.139226 kubelet[2749]: W1028 13:21:51.139218 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.139226 kubelet[2749]: E1028 13:21:51.139227 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.139390 kubelet[2749]: E1028 13:21:51.139372 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.139390 kubelet[2749]: W1028 13:21:51.139383 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.139390 kubelet[2749]: E1028 13:21:51.139389 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.139567 kubelet[2749]: E1028 13:21:51.139549 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.139567 kubelet[2749]: W1028 13:21:51.139560 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.139567 kubelet[2749]: E1028 13:21:51.139567 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.139739 kubelet[2749]: E1028 13:21:51.139721 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.139739 kubelet[2749]: W1028 13:21:51.139731 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.139739 kubelet[2749]: E1028 13:21:51.139738 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.139901 kubelet[2749]: E1028 13:21:51.139885 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.139901 kubelet[2749]: W1028 13:21:51.139894 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.139946 kubelet[2749]: E1028 13:21:51.139904 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.140059 kubelet[2749]: E1028 13:21:51.140041 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.140082 kubelet[2749]: W1028 13:21:51.140066 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.140082 kubelet[2749]: E1028 13:21:51.140074 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.140234 kubelet[2749]: E1028 13:21:51.140217 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.140234 kubelet[2749]: W1028 13:21:51.140227 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.140234 kubelet[2749]: E1028 13:21:51.140234 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.140395 kubelet[2749]: E1028 13:21:51.140379 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.140395 kubelet[2749]: W1028 13:21:51.140389 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.140440 kubelet[2749]: E1028 13:21:51.140396 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.140588 kubelet[2749]: E1028 13:21:51.140571 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.140588 kubelet[2749]: W1028 13:21:51.140581 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.140588 kubelet[2749]: E1028 13:21:51.140589 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.140754 kubelet[2749]: E1028 13:21:51.140738 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.140754 kubelet[2749]: W1028 13:21:51.140747 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.140754 kubelet[2749]: E1028 13:21:51.140755 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.140915 kubelet[2749]: E1028 13:21:51.140899 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.140915 kubelet[2749]: W1028 13:21:51.140908 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.140965 kubelet[2749]: E1028 13:21:51.140916 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.171330 kubelet[2749]: E1028 13:21:51.171291 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.171330 kubelet[2749]: W1028 13:21:51.171306 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.171330 kubelet[2749]: E1028 13:21:51.171316 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:51.171618 kubelet[2749]: E1028 13:21:51.171572 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:51.171618 kubelet[2749]: W1028 13:21:51.171601 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:51.171702 kubelet[2749]: E1028 13:21:51.171630 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:51.990958 containerd[1612]: time="2025-10-28T13:21:51.990902438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:51.991808 containerd[1612]: time="2025-10-28T13:21:51.991755835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:51.994043 containerd[1612]: time="2025-10-28T13:21:51.992877250Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:51.995013 containerd[1612]: time="2025-10-28T13:21:51.994974002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:51.995540 containerd[1612]: time="2025-10-28T13:21:51.995504398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.295810453s" Oct 28 13:21:51.995540 containerd[1612]: time="2025-10-28T13:21:51.995535726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 28 13:21:51.997326 containerd[1612]: time="2025-10-28T13:21:51.997305339Z" level=info msg="CreateContainer within sandbox \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 13:21:52.000631 kubelet[2749]: E1028 13:21:52.000595 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:21:52.006001 containerd[1612]: time="2025-10-28T13:21:52.005969579Z" level=info msg="Container c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:52.013907 containerd[1612]: time="2025-10-28T13:21:52.013869876Z" level=info msg="CreateContainer within sandbox \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5\"" Oct 28 13:21:52.014360 containerd[1612]: time="2025-10-28T13:21:52.014331761Z" level=info msg="StartContainer for \"c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5\"" Oct 28 13:21:52.016193 containerd[1612]: time="2025-10-28T13:21:52.016168489Z" level=info msg="connecting to shim c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5" address="unix:///run/containerd/s/db8f9ee6bca277799a7974ec148e88972b5203a0e1cc371c04fb61289a53089b" protocol=ttrpc version=3 Oct 28 13:21:52.036223 systemd[1]: Started cri-containerd-c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5.scope - libcontainer container c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5. 
Oct 28 13:21:52.068934 kubelet[2749]: I1028 13:21:52.068882 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 13:21:52.069353 kubelet[2749]: E1028 13:21:52.069305 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:52.077293 containerd[1612]: time="2025-10-28T13:21:52.077169977Z" level=info msg="StartContainer for \"c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5\" returns successfully" Oct 28 13:21:52.087148 systemd[1]: cri-containerd-c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5.scope: Deactivated successfully. Oct 28 13:21:52.089004 containerd[1612]: time="2025-10-28T13:21:52.088963236Z" level=info msg="received exit event container_id:\"c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5\" id:\"c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5\" pid:3454 exited_at:{seconds:1761657712 nanos:88523883}" Oct 28 13:21:52.111999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5e9f8c3d363c1cd02bebfff9446b07b29cba259f0746f5250456ba04e25c2c5-rootfs.mount: Deactivated successfully. 
Oct 28 13:21:53.072340 kubelet[2749]: E1028 13:21:53.072292 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:53.073143 containerd[1612]: time="2025-10-28T13:21:53.073106660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 13:21:54.000598 kubelet[2749]: E1028 13:21:54.000511 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:21:56.000842 kubelet[2749]: E1028 13:21:56.000782 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:21:56.783124 containerd[1612]: time="2025-10-28T13:21:56.783077734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:56.783945 containerd[1612]: time="2025-10-28T13:21:56.783881592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Oct 28 13:21:56.785102 containerd[1612]: time="2025-10-28T13:21:56.785074746Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:56.786863 containerd[1612]: time="2025-10-28T13:21:56.786835302Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:56.787407 containerd[1612]: time="2025-10-28T13:21:56.787366526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.714221325s" Oct 28 13:21:56.787448 containerd[1612]: time="2025-10-28T13:21:56.787403826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 28 13:21:56.789265 containerd[1612]: time="2025-10-28T13:21:56.789243982Z" level=info msg="CreateContainer within sandbox \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 13:21:56.798372 containerd[1612]: time="2025-10-28T13:21:56.798319909Z" level=info msg="Container e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:56.806400 containerd[1612]: time="2025-10-28T13:21:56.806362503Z" level=info msg="CreateContainer within sandbox \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad\"" Oct 28 13:21:56.806951 containerd[1612]: time="2025-10-28T13:21:56.806908334Z" level=info msg="StartContainer for \"e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad\"" Oct 28 13:21:56.808557 containerd[1612]: time="2025-10-28T13:21:56.808531440Z" level=info msg="connecting to shim 
e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad" address="unix:///run/containerd/s/db8f9ee6bca277799a7974ec148e88972b5203a0e1cc371c04fb61289a53089b" protocol=ttrpc version=3 Oct 28 13:21:56.836194 systemd[1]: Started cri-containerd-e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad.scope - libcontainer container e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad. Oct 28 13:21:57.374550 containerd[1612]: time="2025-10-28T13:21:57.374508655Z" level=info msg="StartContainer for \"e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad\" returns successfully" Oct 28 13:21:57.958600 systemd[1]: cri-containerd-e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad.scope: Deactivated successfully. Oct 28 13:21:57.958979 systemd[1]: cri-containerd-e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad.scope: Consumed 547ms CPU time, 177.3M memory peak, 3.3M read from disk, 171.3M written to disk. Oct 28 13:21:57.959476 containerd[1612]: time="2025-10-28T13:21:57.959298320Z" level=info msg="received exit event container_id:\"e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad\" id:\"e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad\" pid:3515 exited_at:{seconds:1761657717 nanos:959127397}" Oct 28 13:21:57.982246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e99f188f9a191e31f4c84b9cbbd57841c857d3abb53d088e6715a50af949ecad-rootfs.mount: Deactivated successfully. 
Oct 28 13:21:58.000455 kubelet[2749]: E1028 13:21:58.000395 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:21:58.071610 kubelet[2749]: I1028 13:21:58.071578 2749 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 28 13:21:58.379815 kubelet[2749]: E1028 13:21:58.379655 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:58.568472 systemd[1]: Created slice kubepods-besteffort-pod7657f3fa_3063_4145_84c4_fc4eb45fdecc.slice - libcontainer container kubepods-besteffort-pod7657f3fa_3063_4145_84c4_fc4eb45fdecc.slice. Oct 28 13:21:58.577782 systemd[1]: Created slice kubepods-besteffort-pod9a74fe9b_d7fb_420f_ad6d_4d56d0d183ba.slice - libcontainer container kubepods-besteffort-pod9a74fe9b_d7fb_420f_ad6d_4d56d0d183ba.slice. Oct 28 13:21:58.584906 systemd[1]: Created slice kubepods-burstable-pod1cfdbe1e_1052_434f_b7be_954db8767b55.slice - libcontainer container kubepods-burstable-pod1cfdbe1e_1052_434f_b7be_954db8767b55.slice. Oct 28 13:21:58.591975 systemd[1]: Created slice kubepods-burstable-pod26aeb0f7_3279_40b9_8c1b_6934ea570934.slice - libcontainer container kubepods-burstable-pod26aeb0f7_3279_40b9_8c1b_6934ea570934.slice. Oct 28 13:21:58.599738 systemd[1]: Created slice kubepods-besteffort-pod25e960ce_bdec_4eed_a381_0e4a3ff2145d.slice - libcontainer container kubepods-besteffort-pod25e960ce_bdec_4eed_a381_0e4a3ff2145d.slice. 
Oct 28 13:21:58.606784 systemd[1]: Created slice kubepods-besteffort-pod0acafd4c_9dce_4e7d_bc78_4db28e85758d.slice - libcontainer container kubepods-besteffort-pod0acafd4c_9dce_4e7d_bc78_4db28e85758d.slice. Oct 28 13:21:58.612625 systemd[1]: Created slice kubepods-besteffort-pod192b16a9_1a1e_4db5_aed4_a301ae461858.slice - libcontainer container kubepods-besteffort-pod192b16a9_1a1e_4db5_aed4_a301ae461858.slice. Oct 28 13:21:58.623619 kubelet[2749]: I1028 13:21:58.623579 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0acafd4c-9dce-4e7d-bc78-4db28e85758d-tigera-ca-bundle\") pod \"calico-kube-controllers-7dd766f59-xz29t\" (UID: \"0acafd4c-9dce-4e7d-bc78-4db28e85758d\") " pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" Oct 28 13:21:58.623764 kubelet[2749]: I1028 13:21:58.623640 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-ca-bundle\") pod \"whisker-54949d87fc-n66fg\" (UID: \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\") " pod="calico-system/whisker-54949d87fc-n66fg" Oct 28 13:21:58.623764 kubelet[2749]: I1028 13:21:58.623666 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rvt\" (UniqueName: \"kubernetes.io/projected/9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba-kube-api-access-g5rvt\") pod \"calico-apiserver-8554b7fc49-wqtwg\" (UID: \"9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba\") " pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" Oct 28 13:21:58.623764 kubelet[2749]: I1028 13:21:58.623684 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9b9\" (UniqueName: \"kubernetes.io/projected/0acafd4c-9dce-4e7d-bc78-4db28e85758d-kube-api-access-zq9b9\") pod 
\"calico-kube-controllers-7dd766f59-xz29t\" (UID: \"0acafd4c-9dce-4e7d-bc78-4db28e85758d\") " pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" Oct 28 13:21:58.623764 kubelet[2749]: I1028 13:21:58.623705 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/192b16a9-1a1e-4db5-aed4-a301ae461858-calico-apiserver-certs\") pod \"calico-apiserver-8554b7fc49-wxqqx\" (UID: \"192b16a9-1a1e-4db5-aed4-a301ae461858\") " pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" Oct 28 13:21:58.623903 kubelet[2749]: I1028 13:21:58.623817 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcz85\" (UniqueName: \"kubernetes.io/projected/1cfdbe1e-1052-434f-b7be-954db8767b55-kube-api-access-rcz85\") pod \"coredns-668d6bf9bc-l67s2\" (UID: \"1cfdbe1e-1052-434f-b7be-954db8767b55\") " pod="kube-system/coredns-668d6bf9bc-l67s2" Oct 28 13:21:58.623903 kubelet[2749]: I1028 13:21:58.623858 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/25e960ce-bdec-4eed-a381-0e4a3ff2145d-goldmane-key-pair\") pod \"goldmane-666569f655-mtvt9\" (UID: \"25e960ce-bdec-4eed-a381-0e4a3ff2145d\") " pod="calico-system/goldmane-666569f655-mtvt9" Oct 28 13:21:58.623903 kubelet[2749]: I1028 13:21:58.623896 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cfdbe1e-1052-434f-b7be-954db8767b55-config-volume\") pod \"coredns-668d6bf9bc-l67s2\" (UID: \"1cfdbe1e-1052-434f-b7be-954db8767b55\") " pod="kube-system/coredns-668d6bf9bc-l67s2" Oct 28 13:21:58.623991 kubelet[2749]: I1028 13:21:58.623926 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/25e960ce-bdec-4eed-a381-0e4a3ff2145d-config\") pod \"goldmane-666569f655-mtvt9\" (UID: \"25e960ce-bdec-4eed-a381-0e4a3ff2145d\") " pod="calico-system/goldmane-666569f655-mtvt9" Oct 28 13:21:58.623991 kubelet[2749]: I1028 13:21:58.623962 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7wn\" (UniqueName: \"kubernetes.io/projected/25e960ce-bdec-4eed-a381-0e4a3ff2145d-kube-api-access-6g7wn\") pod \"goldmane-666569f655-mtvt9\" (UID: \"25e960ce-bdec-4eed-a381-0e4a3ff2145d\") " pod="calico-system/goldmane-666569f655-mtvt9" Oct 28 13:21:58.624076 kubelet[2749]: I1028 13:21:58.623998 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba-calico-apiserver-certs\") pod \"calico-apiserver-8554b7fc49-wqtwg\" (UID: \"9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba\") " pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" Oct 28 13:21:58.624076 kubelet[2749]: I1028 13:21:58.624030 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26aeb0f7-3279-40b9-8c1b-6934ea570934-config-volume\") pod \"coredns-668d6bf9bc-4b667\" (UID: \"26aeb0f7-3279-40b9-8c1b-6934ea570934\") " pod="kube-system/coredns-668d6bf9bc-4b667" Oct 28 13:21:58.624138 kubelet[2749]: I1028 13:21:58.624089 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmb8h\" (UniqueName: \"kubernetes.io/projected/26aeb0f7-3279-40b9-8c1b-6934ea570934-kube-api-access-fmb8h\") pod \"coredns-668d6bf9bc-4b667\" (UID: \"26aeb0f7-3279-40b9-8c1b-6934ea570934\") " pod="kube-system/coredns-668d6bf9bc-4b667" Oct 28 13:21:58.624138 kubelet[2749]: I1028 13:21:58.624126 2749 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25e960ce-bdec-4eed-a381-0e4a3ff2145d-goldmane-ca-bundle\") pod \"goldmane-666569f655-mtvt9\" (UID: \"25e960ce-bdec-4eed-a381-0e4a3ff2145d\") " pod="calico-system/goldmane-666569f655-mtvt9" Oct 28 13:21:58.624217 kubelet[2749]: I1028 13:21:58.624171 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l5s\" (UniqueName: \"kubernetes.io/projected/192b16a9-1a1e-4db5-aed4-a301ae461858-kube-api-access-z5l5s\") pod \"calico-apiserver-8554b7fc49-wxqqx\" (UID: \"192b16a9-1a1e-4db5-aed4-a301ae461858\") " pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" Oct 28 13:21:58.624217 kubelet[2749]: I1028 13:21:58.624203 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-backend-key-pair\") pod \"whisker-54949d87fc-n66fg\" (UID: \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\") " pod="calico-system/whisker-54949d87fc-n66fg" Oct 28 13:21:58.624275 kubelet[2749]: I1028 13:21:58.624240 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ktd\" (UniqueName: \"kubernetes.io/projected/7657f3fa-3063-4145-84c4-fc4eb45fdecc-kube-api-access-f2ktd\") pod \"whisker-54949d87fc-n66fg\" (UID: \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\") " pod="calico-system/whisker-54949d87fc-n66fg" Oct 28 13:21:58.874979 containerd[1612]: time="2025-10-28T13:21:58.874930728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54949d87fc-n66fg,Uid:7657f3fa-3063-4145-84c4-fc4eb45fdecc,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:58.882780 containerd[1612]: time="2025-10-28T13:21:58.882738526Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wqtwg,Uid:9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:21:58.890304 kubelet[2749]: E1028 13:21:58.889810 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:58.891283 containerd[1612]: time="2025-10-28T13:21:58.891242366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l67s2,Uid:1cfdbe1e-1052-434f-b7be-954db8767b55,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:58.896268 kubelet[2749]: E1028 13:21:58.896236 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:58.901878 containerd[1612]: time="2025-10-28T13:21:58.900065439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4b667,Uid:26aeb0f7-3279-40b9-8c1b-6934ea570934,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:58.920375 containerd[1612]: time="2025-10-28T13:21:58.920328696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wxqqx,Uid:192b16a9-1a1e-4db5-aed4-a301ae461858,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:21:58.920502 containerd[1612]: time="2025-10-28T13:21:58.920436829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mtvt9,Uid:25e960ce-bdec-4eed-a381-0e4a3ff2145d,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:58.920502 containerd[1612]: time="2025-10-28T13:21:58.920483347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dd766f59-xz29t,Uid:0acafd4c-9dce-4e7d-bc78-4db28e85758d,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:59.029524 containerd[1612]: time="2025-10-28T13:21:59.029471587Z" level=error msg="Failed to destroy network for sandbox 
\"0d4205c563d2a2e7761fec2b63d099b0c7500c3c448c3073f83fca2ac6873f0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.032853 systemd[1]: run-netns-cni\x2db93d8088\x2d4daa\x2d759e\x2da2cf\x2d25aee6ee596b.mount: Deactivated successfully. Oct 28 13:21:59.036971 containerd[1612]: time="2025-10-28T13:21:59.034257005Z" level=error msg="Failed to destroy network for sandbox \"5bd0f7e32f6aaa2f55c3a602e1c0b78797a9299443de972d4f095a0ef153ccfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.036816 systemd[1]: run-netns-cni\x2d22e9b6a3\x2dd5ba\x2d6611\x2d5186\x2dd456c5ae8286.mount: Deactivated successfully. Oct 28 13:21:59.038775 containerd[1612]: time="2025-10-28T13:21:59.037704940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l67s2,Uid:1cfdbe1e-1052-434f-b7be-954db8767b55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4205c563d2a2e7761fec2b63d099b0c7500c3c448c3073f83fca2ac6873f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.044982 containerd[1612]: time="2025-10-28T13:21:59.044924712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wqtwg,Uid:9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bd0f7e32f6aaa2f55c3a602e1c0b78797a9299443de972d4f095a0ef153ccfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.054484 containerd[1612]: time="2025-10-28T13:21:59.053521361Z" level=error msg="Failed to destroy network for sandbox \"6550594b5d5a94adacba480e9c9decda0f3106826cac430a2528fdc4b4ef6e23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.059836 containerd[1612]: time="2025-10-28T13:21:59.059788996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54949d87fc-n66fg,Uid:7657f3fa-3063-4145-84c4-fc4eb45fdecc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6550594b5d5a94adacba480e9c9decda0f3106826cac430a2528fdc4b4ef6e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.066624 kubelet[2749]: E1028 13:21:59.066569 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4205c563d2a2e7761fec2b63d099b0c7500c3c448c3073f83fca2ac6873f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.067032 kubelet[2749]: E1028 13:21:59.066622 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bd0f7e32f6aaa2f55c3a602e1c0b78797a9299443de972d4f095a0ef153ccfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.067032 kubelet[2749]: E1028 
13:21:59.066652 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4205c563d2a2e7761fec2b63d099b0c7500c3c448c3073f83fca2ac6873f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l67s2" Oct 28 13:21:59.067032 kubelet[2749]: E1028 13:21:59.066672 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bd0f7e32f6aaa2f55c3a602e1c0b78797a9299443de972d4f095a0ef153ccfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" Oct 28 13:21:59.067032 kubelet[2749]: E1028 13:21:59.066695 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bd0f7e32f6aaa2f55c3a602e1c0b78797a9299443de972d4f095a0ef153ccfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" Oct 28 13:21:59.067154 kubelet[2749]: E1028 13:21:59.066703 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6550594b5d5a94adacba480e9c9decda0f3106826cac430a2528fdc4b4ef6e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.067154 kubelet[2749]: E1028 13:21:59.066736 2749 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6550594b5d5a94adacba480e9c9decda0f3106826cac430a2528fdc4b4ef6e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54949d87fc-n66fg" Oct 28 13:21:59.067154 kubelet[2749]: E1028 13:21:59.066749 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6550594b5d5a94adacba480e9c9decda0f3106826cac430a2528fdc4b4ef6e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54949d87fc-n66fg" Oct 28 13:21:59.067227 kubelet[2749]: E1028 13:21:59.066752 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8554b7fc49-wqtwg_calico-apiserver(9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8554b7fc49-wqtwg_calico-apiserver(9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bd0f7e32f6aaa2f55c3a602e1c0b78797a9299443de972d4f095a0ef153ccfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba" Oct 28 13:21:59.067227 kubelet[2749]: E1028 13:21:59.066789 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54949d87fc-n66fg_calico-system(7657f3fa-3063-4145-84c4-fc4eb45fdecc)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"whisker-54949d87fc-n66fg_calico-system(7657f3fa-3063-4145-84c4-fc4eb45fdecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6550594b5d5a94adacba480e9c9decda0f3106826cac430a2528fdc4b4ef6e23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54949d87fc-n66fg" podUID="7657f3fa-3063-4145-84c4-fc4eb45fdecc" Oct 28 13:21:59.067227 kubelet[2749]: E1028 13:21:59.066676 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4205c563d2a2e7761fec2b63d099b0c7500c3c448c3073f83fca2ac6873f0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l67s2" Oct 28 13:21:59.067345 kubelet[2749]: E1028 13:21:59.066833 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l67s2_kube-system(1cfdbe1e-1052-434f-b7be-954db8767b55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l67s2_kube-system(1cfdbe1e-1052-434f-b7be-954db8767b55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d4205c563d2a2e7761fec2b63d099b0c7500c3c448c3073f83fca2ac6873f0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l67s2" podUID="1cfdbe1e-1052-434f-b7be-954db8767b55" Oct 28 13:21:59.088356 containerd[1612]: time="2025-10-28T13:21:59.088303114Z" level=error msg="Failed to destroy network for sandbox 
\"2be5ec8f2f62b89941bb2a9d5fed37cb9e269f394acf607c5275c2c02f14357e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.090813 containerd[1612]: time="2025-10-28T13:21:59.090751412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4b667,Uid:26aeb0f7-3279-40b9-8c1b-6934ea570934,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be5ec8f2f62b89941bb2a9d5fed37cb9e269f394acf607c5275c2c02f14357e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.091032 kubelet[2749]: E1028 13:21:59.090990 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be5ec8f2f62b89941bb2a9d5fed37cb9e269f394acf607c5275c2c02f14357e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.091097 kubelet[2749]: E1028 13:21:59.091063 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be5ec8f2f62b89941bb2a9d5fed37cb9e269f394acf607c5275c2c02f14357e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4b667" Oct 28 13:21:59.091097 kubelet[2749]: E1028 13:21:59.091082 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2be5ec8f2f62b89941bb2a9d5fed37cb9e269f394acf607c5275c2c02f14357e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4b667" Oct 28 13:21:59.091147 kubelet[2749]: E1028 13:21:59.091130 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4b667_kube-system(26aeb0f7-3279-40b9-8c1b-6934ea570934)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4b667_kube-system(26aeb0f7-3279-40b9-8c1b-6934ea570934)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2be5ec8f2f62b89941bb2a9d5fed37cb9e269f394acf607c5275c2c02f14357e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4b667" podUID="26aeb0f7-3279-40b9-8c1b-6934ea570934" Oct 28 13:21:59.095774 containerd[1612]: time="2025-10-28T13:21:59.095718874Z" level=error msg="Failed to destroy network for sandbox \"9bf1e3c7a1994588cb9b03601a85000232b80beebf1eee5ee73864666e460da8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.098127 containerd[1612]: time="2025-10-28T13:21:59.098043250Z" level=error msg="Failed to destroy network for sandbox \"c11d6baf06331ceef53aca2795f4efeefb7ba50fe8bfd30f4ededc18e6f6f668\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.098407 containerd[1612]: time="2025-10-28T13:21:59.098297199Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7dd766f59-xz29t,Uid:0acafd4c-9dce-4e7d-bc78-4db28e85758d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf1e3c7a1994588cb9b03601a85000232b80beebf1eee5ee73864666e460da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.098608 kubelet[2749]: E1028 13:21:59.098562 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf1e3c7a1994588cb9b03601a85000232b80beebf1eee5ee73864666e460da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.098687 kubelet[2749]: E1028 13:21:59.098628 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf1e3c7a1994588cb9b03601a85000232b80beebf1eee5ee73864666e460da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" Oct 28 13:21:59.098687 kubelet[2749]: E1028 13:21:59.098647 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf1e3c7a1994588cb9b03601a85000232b80beebf1eee5ee73864666e460da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" Oct 28 13:21:59.098764 kubelet[2749]: E1028 13:21:59.098685 
2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dd766f59-xz29t_calico-system(0acafd4c-9dce-4e7d-bc78-4db28e85758d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dd766f59-xz29t_calico-system(0acafd4c-9dce-4e7d-bc78-4db28e85758d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bf1e3c7a1994588cb9b03601a85000232b80beebf1eee5ee73864666e460da8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d" Oct 28 13:21:59.100578 containerd[1612]: time="2025-10-28T13:21:59.100538267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mtvt9,Uid:25e960ce-bdec-4eed-a381-0e4a3ff2145d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c11d6baf06331ceef53aca2795f4efeefb7ba50fe8bfd30f4ededc18e6f6f668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.100778 kubelet[2749]: E1028 13:21:59.100752 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c11d6baf06331ceef53aca2795f4efeefb7ba50fe8bfd30f4ededc18e6f6f668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.100911 kubelet[2749]: E1028 13:21:59.100809 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"c11d6baf06331ceef53aca2795f4efeefb7ba50fe8bfd30f4ededc18e6f6f668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mtvt9" Oct 28 13:21:59.100911 kubelet[2749]: E1028 13:21:59.100824 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c11d6baf06331ceef53aca2795f4efeefb7ba50fe8bfd30f4ededc18e6f6f668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mtvt9" Oct 28 13:21:59.100911 kubelet[2749]: E1028 13:21:59.100862 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mtvt9_calico-system(25e960ce-bdec-4eed-a381-0e4a3ff2145d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mtvt9_calico-system(25e960ce-bdec-4eed-a381-0e4a3ff2145d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c11d6baf06331ceef53aca2795f4efeefb7ba50fe8bfd30f4ededc18e6f6f668\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mtvt9" podUID="25e960ce-bdec-4eed-a381-0e4a3ff2145d" Oct 28 13:21:59.106186 containerd[1612]: time="2025-10-28T13:21:59.106141750Z" level=error msg="Failed to destroy network for sandbox \"2b126692fe3c25430220a41d4cd4831fe6c258bc316ad7bc2abff9ea2763f601\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 28 13:21:59.108095 containerd[1612]: time="2025-10-28T13:21:59.108039460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wxqqx,Uid:192b16a9-1a1e-4db5-aed4-a301ae461858,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b126692fe3c25430220a41d4cd4831fe6c258bc316ad7bc2abff9ea2763f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.108297 kubelet[2749]: E1028 13:21:59.108257 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b126692fe3c25430220a41d4cd4831fe6c258bc316ad7bc2abff9ea2763f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:59.108297 kubelet[2749]: E1028 13:21:59.108297 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b126692fe3c25430220a41d4cd4831fe6c258bc316ad7bc2abff9ea2763f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" Oct 28 13:21:59.108420 kubelet[2749]: E1028 13:21:59.108312 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b126692fe3c25430220a41d4cd4831fe6c258bc316ad7bc2abff9ea2763f601\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" Oct 28 13:21:59.108420 kubelet[2749]: E1028 13:21:59.108342 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8554b7fc49-wxqqx_calico-apiserver(192b16a9-1a1e-4db5-aed4-a301ae461858)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8554b7fc49-wxqqx_calico-apiserver(192b16a9-1a1e-4db5-aed4-a301ae461858)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b126692fe3c25430220a41d4cd4831fe6c258bc316ad7bc2abff9ea2763f601\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858" Oct 28 13:21:59.383777 kubelet[2749]: E1028 13:21:59.383745 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:59.384458 containerd[1612]: time="2025-10-28T13:21:59.384409593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 13:21:59.982935 systemd[1]: run-netns-cni\x2dabe1a957\x2d647f\x2d55d6\x2d0261\x2d6449ff92c112.mount: Deactivated successfully. Oct 28 13:21:59.983045 systemd[1]: run-netns-cni\x2d7af679f2\x2d637e\x2d763c\x2d3296\x2d8af9f284c3fd.mount: Deactivated successfully. Oct 28 13:21:59.983127 systemd[1]: run-netns-cni\x2df6f261cf\x2d9681\x2dc737\x2dee4c\x2d0e76952e10f1.mount: Deactivated successfully. Oct 28 13:21:59.983193 systemd[1]: run-netns-cni\x2d9c771de1\x2d081e\x2d137d\x2da044\x2dd05f4a7e8ea8.mount: Deactivated successfully. Oct 28 13:21:59.983256 systemd[1]: run-netns-cni\x2d5a8f47c1\x2dd0c8\x2db850\x2db7bb\x2d59f60cde7196.mount: Deactivated successfully. 
Oct 28 13:22:00.005816 systemd[1]: Created slice kubepods-besteffort-podf768eb5b_b675_4026_8f12_83b3103b89d1.slice - libcontainer container kubepods-besteffort-podf768eb5b_b675_4026_8f12_83b3103b89d1.slice. Oct 28 13:22:00.009107 containerd[1612]: time="2025-10-28T13:22:00.008621740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cbn9,Uid:f768eb5b-b675-4026-8f12-83b3103b89d1,Namespace:calico-system,Attempt:0,}" Oct 28 13:22:00.062482 containerd[1612]: time="2025-10-28T13:22:00.062412668Z" level=error msg="Failed to destroy network for sandbox \"b3d9f08026451d7f767076b132b5e19922a584e9615e80ee89e1bb2bc5368439\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:22:00.064569 systemd[1]: run-netns-cni\x2d6cb7b2c8\x2deb58\x2d3b4b\x2d93a7\x2d8709022e0bdc.mount: Deactivated successfully. Oct 28 13:22:00.065400 containerd[1612]: time="2025-10-28T13:22:00.065344166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cbn9,Uid:f768eb5b-b675-4026-8f12-83b3103b89d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d9f08026451d7f767076b132b5e19922a584e9615e80ee89e1bb2bc5368439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:22:00.066305 kubelet[2749]: E1028 13:22:00.066254 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d9f08026451d7f767076b132b5e19922a584e9615e80ee89e1bb2bc5368439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 
13:22:00.066370 kubelet[2749]: E1028 13:22:00.066327 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d9f08026451d7f767076b132b5e19922a584e9615e80ee89e1bb2bc5368439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:22:00.066370 kubelet[2749]: E1028 13:22:00.066351 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d9f08026451d7f767076b132b5e19922a584e9615e80ee89e1bb2bc5368439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4cbn9" Oct 28 13:22:00.066432 kubelet[2749]: E1028 13:22:00.066410 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3d9f08026451d7f767076b132b5e19922a584e9615e80ee89e1bb2bc5368439\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:22:04.846314 kubelet[2749]: I1028 13:22:04.846252 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 13:22:04.848814 kubelet[2749]: E1028 13:22:04.847862 2749 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:05.395045 kubelet[2749]: E1028 13:22:05.395009 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:06.397011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430036531.mount: Deactivated successfully. Oct 28 13:22:07.397355 containerd[1612]: time="2025-10-28T13:22:07.397286374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:22:07.398296 containerd[1612]: time="2025-10-28T13:22:07.398259846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Oct 28 13:22:07.399596 containerd[1612]: time="2025-10-28T13:22:07.399559943Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:22:07.401548 containerd[1612]: time="2025-10-28T13:22:07.401506285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:22:07.402064 containerd[1612]: time="2025-10-28T13:22:07.402028619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.017580082s" Oct 28 13:22:07.402096 containerd[1612]: time="2025-10-28T13:22:07.402076929Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 28 13:22:07.409697 containerd[1612]: time="2025-10-28T13:22:07.409646445Z" level=info msg="CreateContainer within sandbox \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 13:22:07.599201 containerd[1612]: time="2025-10-28T13:22:07.598765943Z" level=info msg="Container 1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:22:07.623174 containerd[1612]: time="2025-10-28T13:22:07.623123743Z" level=info msg="CreateContainer within sandbox \"64d254a1d8e42f8651015f6cfc26f04e12196588830c30d727a518b59d002beb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc\"" Oct 28 13:22:07.623645 containerd[1612]: time="2025-10-28T13:22:07.623624266Z" level=info msg="StartContainer for \"1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc\"" Oct 28 13:22:07.625255 containerd[1612]: time="2025-10-28T13:22:07.625211132Z" level=info msg="connecting to shim 1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc" address="unix:///run/containerd/s/db8f9ee6bca277799a7974ec148e88972b5203a0e1cc371c04fb61289a53089b" protocol=ttrpc version=3 Oct 28 13:22:07.709183 systemd[1]: Started cri-containerd-1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc.scope - libcontainer container 1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc. Oct 28 13:22:07.789212 containerd[1612]: time="2025-10-28T13:22:07.789175799Z" level=info msg="StartContainer for \"1474f430afb0e6f1d6ba4422ce5d0de390cbeabc3ca02b3f02519a2d42ffbcfc\" returns successfully" Oct 28 13:22:07.858860 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Oct 28 13:22:07.860173 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Oct 28 13:22:08.081807 kubelet[2749]: I1028 13:22:08.081662 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2ktd\" (UniqueName: \"kubernetes.io/projected/7657f3fa-3063-4145-84c4-fc4eb45fdecc-kube-api-access-f2ktd\") pod \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\" (UID: \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\") " Oct 28 13:22:08.081807 kubelet[2749]: I1028 13:22:08.081717 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-backend-key-pair\") pod \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\" (UID: \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\") " Oct 28 13:22:08.081807 kubelet[2749]: I1028 13:22:08.081742 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-ca-bundle\") pod \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\" (UID: \"7657f3fa-3063-4145-84c4-fc4eb45fdecc\") " Oct 28 13:22:08.082354 kubelet[2749]: I1028 13:22:08.082222 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7657f3fa-3063-4145-84c4-fc4eb45fdecc" (UID: "7657f3fa-3063-4145-84c4-fc4eb45fdecc"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 13:22:08.086153 kubelet[2749]: I1028 13:22:08.086095 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7657f3fa-3063-4145-84c4-fc4eb45fdecc-kube-api-access-f2ktd" (OuterVolumeSpecName: "kube-api-access-f2ktd") pod "7657f3fa-3063-4145-84c4-fc4eb45fdecc" (UID: "7657f3fa-3063-4145-84c4-fc4eb45fdecc"). InnerVolumeSpecName "kube-api-access-f2ktd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 13:22:08.087034 kubelet[2749]: I1028 13:22:08.087002 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7657f3fa-3063-4145-84c4-fc4eb45fdecc" (UID: "7657f3fa-3063-4145-84c4-fc4eb45fdecc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 13:22:08.182754 kubelet[2749]: I1028 13:22:08.182697 2749 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 28 13:22:08.182754 kubelet[2749]: I1028 13:22:08.182731 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f2ktd\" (UniqueName: \"kubernetes.io/projected/7657f3fa-3063-4145-84c4-fc4eb45fdecc-kube-api-access-f2ktd\") on node \"localhost\" DevicePath \"\"" Oct 28 13:22:08.182754 kubelet[2749]: I1028 13:22:08.182741 2749 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7657f3fa-3063-4145-84c4-fc4eb45fdecc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 28 13:22:08.406120 kubelet[2749]: E1028 13:22:08.404653 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:08.407459 systemd[1]: var-lib-kubelet-pods-7657f3fa\x2d3063\x2d4145\x2d84c4\x2dfc4eb45fdecc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df2ktd.mount: Deactivated successfully. Oct 28 13:22:08.407572 systemd[1]: var-lib-kubelet-pods-7657f3fa\x2d3063\x2d4145\x2d84c4\x2dfc4eb45fdecc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 28 13:22:08.411033 systemd[1]: Removed slice kubepods-besteffort-pod7657f3fa_3063_4145_84c4_fc4eb45fdecc.slice - libcontainer container kubepods-besteffort-pod7657f3fa_3063_4145_84c4_fc4eb45fdecc.slice. Oct 28 13:22:08.988796 kubelet[2749]: I1028 13:22:08.988727 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rn62f" podStartSLOduration=2.217380471 podStartE2EDuration="20.988708927s" podCreationTimestamp="2025-10-28 13:21:48 +0000 UTC" firstStartedPulling="2025-10-28 13:21:48.631150691 +0000 UTC m=+24.818647822" lastFinishedPulling="2025-10-28 13:22:07.402479147 +0000 UTC m=+43.589976278" observedRunningTime="2025-10-28 13:22:08.976542466 +0000 UTC m=+45.164039687" watchObservedRunningTime="2025-10-28 13:22:08.988708927 +0000 UTC m=+45.176206058" Oct 28 13:22:09.023965 systemd[1]: Created slice kubepods-besteffort-podc96b0190_3699_44b4_be4c_b9b392bdd84b.slice - libcontainer container kubepods-besteffort-podc96b0190_3699_44b4_be4c_b9b392bdd84b.slice. 
Oct 28 13:22:09.089296 kubelet[2749]: I1028 13:22:09.089248 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7sr\" (UniqueName: \"kubernetes.io/projected/c96b0190-3699-44b4-be4c-b9b392bdd84b-kube-api-access-mk7sr\") pod \"whisker-6b78c7cbf-jfw2j\" (UID: \"c96b0190-3699-44b4-be4c-b9b392bdd84b\") " pod="calico-system/whisker-6b78c7cbf-jfw2j" Oct 28 13:22:09.089296 kubelet[2749]: I1028 13:22:09.089295 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96b0190-3699-44b4-be4c-b9b392bdd84b-whisker-ca-bundle\") pod \"whisker-6b78c7cbf-jfw2j\" (UID: \"c96b0190-3699-44b4-be4c-b9b392bdd84b\") " pod="calico-system/whisker-6b78c7cbf-jfw2j" Oct 28 13:22:09.089296 kubelet[2749]: I1028 13:22:09.089315 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c96b0190-3699-44b4-be4c-b9b392bdd84b-whisker-backend-key-pair\") pod \"whisker-6b78c7cbf-jfw2j\" (UID: \"c96b0190-3699-44b4-be4c-b9b392bdd84b\") " pod="calico-system/whisker-6b78c7cbf-jfw2j" Oct 28 13:22:09.330603 containerd[1612]: time="2025-10-28T13:22:09.330480699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b78c7cbf-jfw2j,Uid:c96b0190-3699-44b4-be4c-b9b392bdd84b,Namespace:calico-system,Attempt:0,}" Oct 28 13:22:09.405927 kubelet[2749]: E1028 13:22:09.405895 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:09.499140 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:50234.service - OpenSSH per-connection server daemon (10.0.0.1:50234). 
Oct 28 13:22:09.579718 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 50234 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:09.581359 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:09.587272 systemd-logind[1598]: New session 8 of user core. Oct 28 13:22:09.595269 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 28 13:22:09.618302 systemd-networkd[1515]: cali032b3c848e3: Link UP Oct 28 13:22:09.619402 systemd-networkd[1515]: cali032b3c848e3: Gained carrier Oct 28 13:22:09.633787 systemd-networkd[1515]: vxlan.calico: Link UP Oct 28 13:22:09.633800 systemd-networkd[1515]: vxlan.calico: Gained carrier Oct 28 13:22:09.638665 containerd[1612]: 2025-10-28 13:22:09.486 [INFO][4067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0 whisker-6b78c7cbf- calico-system c96b0190-3699-44b4-be4c-b9b392bdd84b 917 0 2025-10-28 13:22:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b78c7cbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6b78c7cbf-jfw2j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali032b3c848e3 [] [] }} ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-" Oct 28 13:22:09.638665 containerd[1612]: 2025-10-28 13:22:09.486 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.638665 containerd[1612]: 2025-10-28 13:22:09.565 
[INFO][4099] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" HandleID="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Workload="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.566 [INFO][4099] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" HandleID="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Workload="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6b78c7cbf-jfw2j", "timestamp":"2025-10-28 13:22:09.565581494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.566 [INFO][4099] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.566 [INFO][4099] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.566 [INFO][4099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.575 [INFO][4099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" host="localhost" Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.582 [INFO][4099] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.586 [INFO][4099] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.589 [INFO][4099] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.591 [INFO][4099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:09.638862 containerd[1612]: 2025-10-28 13:22:09.591 [INFO][4099] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" host="localhost" Oct 28 13:22:09.639917 containerd[1612]: 2025-10-28 13:22:09.592 [INFO][4099] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6 Oct 28 13:22:09.639917 containerd[1612]: 2025-10-28 13:22:09.597 [INFO][4099] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" host="localhost" Oct 28 13:22:09.639917 containerd[1612]: 2025-10-28 13:22:09.604 [INFO][4099] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" host="localhost" Oct 28 13:22:09.639917 containerd[1612]: 2025-10-28 13:22:09.604 [INFO][4099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" host="localhost" Oct 28 13:22:09.639917 containerd[1612]: 2025-10-28 13:22:09.604 [INFO][4099] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:09.639917 containerd[1612]: 2025-10-28 13:22:09.604 [INFO][4099] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" HandleID="k8s-pod-network.62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Workload="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.640220 containerd[1612]: 2025-10-28 13:22:09.608 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0", GenerateName:"whisker-6b78c7cbf-", Namespace:"calico-system", SelfLink:"", UID:"c96b0190-3699-44b4-be4c-b9b392bdd84b", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b78c7cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6b78c7cbf-jfw2j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali032b3c848e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:09.640220 containerd[1612]: 2025-10-28 13:22:09.608 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.640309 containerd[1612]: 2025-10-28 13:22:09.608 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali032b3c848e3 ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.640309 containerd[1612]: 2025-10-28 13:22:09.622 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.640404 containerd[1612]: 2025-10-28 13:22:09.622 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" 
WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0", GenerateName:"whisker-6b78c7cbf-", Namespace:"calico-system", SelfLink:"", UID:"c96b0190-3699-44b4-be4c-b9b392bdd84b", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b78c7cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6", Pod:"whisker-6b78c7cbf-jfw2j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali032b3c848e3", MAC:"2a:80:cf:a5:34:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:09.640455 containerd[1612]: 2025-10-28 13:22:09.633 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" Namespace="calico-system" Pod="whisker-6b78c7cbf-jfw2j" WorkloadEndpoint="localhost-k8s-whisker--6b78c7cbf--jfw2j-eth0" Oct 28 13:22:09.750407 sshd[4126]: Connection closed by 10.0.0.1 port 50234 Oct 28 13:22:09.750704 sshd-session[4100]: 
pam_unix(sshd:session): session closed for user core Oct 28 13:22:09.756964 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:50234.service: Deactivated successfully. Oct 28 13:22:09.759303 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 13:22:09.771235 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Oct 28 13:22:09.772719 systemd-logind[1598]: Removed session 8. Oct 28 13:22:09.784930 containerd[1612]: time="2025-10-28T13:22:09.784865986Z" level=info msg="connecting to shim 62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6" address="unix:///run/containerd/s/b27a3548c942c4236c7b7daec8a7031d9d825eec4b4aa78da0446819e8d8f659" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:09.811212 systemd[1]: Started cri-containerd-62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6.scope - libcontainer container 62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6. Oct 28 13:22:09.824814 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:09.857677 containerd[1612]: time="2025-10-28T13:22:09.857572838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b78c7cbf-jfw2j,Uid:c96b0190-3699-44b4-be4c-b9b392bdd84b,Namespace:calico-system,Attempt:0,} returns sandbox id \"62fea71cb8133f01b16f25d4d532ea9534c963ab0602c20b7c6d1059c65675a6\"" Oct 28 13:22:09.860977 containerd[1612]: time="2025-10-28T13:22:09.860936414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 13:22:10.001762 containerd[1612]: time="2025-10-28T13:22:10.001183136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wxqqx,Uid:192b16a9-1a1e-4db5-aed4-a301ae461858,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:22:10.003543 kubelet[2749]: I1028 13:22:10.003502 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7657f3fa-3063-4145-84c4-fc4eb45fdecc" 
path="/var/lib/kubelet/pods/7657f3fa-3063-4145-84c4-fc4eb45fdecc/volumes" Oct 28 13:22:10.098872 systemd-networkd[1515]: cali484b6d14ef9: Link UP Oct 28 13:22:10.099351 systemd-networkd[1515]: cali484b6d14ef9: Gained carrier Oct 28 13:22:10.113734 containerd[1612]: 2025-10-28 13:22:10.040 [INFO][4266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0 calico-apiserver-8554b7fc49- calico-apiserver 192b16a9-1a1e-4db5-aed4-a301ae461858 828 0 2025-10-28 13:21:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8554b7fc49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8554b7fc49-wxqqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali484b6d14ef9 [] [] }} ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-" Oct 28 13:22:10.113734 containerd[1612]: 2025-10-28 13:22:10.040 [INFO][4266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.113734 containerd[1612]: 2025-10-28 13:22:10.065 [INFO][4282] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" HandleID="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Workload="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.113916 
containerd[1612]: 2025-10-28 13:22:10.065 [INFO][4282] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" HandleID="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Workload="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8554b7fc49-wxqqx", "timestamp":"2025-10-28 13:22:10.065702732 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.065 [INFO][4282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.065 [INFO][4282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.065 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.072 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" host="localhost" Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.076 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.079 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.081 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.083 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:10.113916 containerd[1612]: 2025-10-28 13:22:10.083 [INFO][4282] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" host="localhost" Oct 28 13:22:10.114237 containerd[1612]: 2025-10-28 13:22:10.084 [INFO][4282] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c Oct 28 13:22:10.114237 containerd[1612]: 2025-10-28 13:22:10.087 [INFO][4282] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" host="localhost" Oct 28 13:22:10.114237 containerd[1612]: 2025-10-28 13:22:10.093 [INFO][4282] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" host="localhost" Oct 28 13:22:10.114237 containerd[1612]: 2025-10-28 13:22:10.093 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" host="localhost" Oct 28 13:22:10.114237 containerd[1612]: 2025-10-28 13:22:10.093 [INFO][4282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:10.114237 containerd[1612]: 2025-10-28 13:22:10.093 [INFO][4282] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" HandleID="k8s-pod-network.d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Workload="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.114352 containerd[1612]: 2025-10-28 13:22:10.096 [INFO][4266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0", GenerateName:"calico-apiserver-8554b7fc49-", Namespace:"calico-apiserver", SelfLink:"", UID:"192b16a9-1a1e-4db5-aed4-a301ae461858", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8554b7fc49", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8554b7fc49-wxqqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali484b6d14ef9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:10.114409 containerd[1612]: 2025-10-28 13:22:10.096 [INFO][4266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.114409 containerd[1612]: 2025-10-28 13:22:10.097 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali484b6d14ef9 ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.114409 containerd[1612]: 2025-10-28 13:22:10.099 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.114473 containerd[1612]: 2025-10-28 13:22:10.100 [INFO][4266] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0", GenerateName:"calico-apiserver-8554b7fc49-", Namespace:"calico-apiserver", SelfLink:"", UID:"192b16a9-1a1e-4db5-aed4-a301ae461858", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8554b7fc49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c", Pod:"calico-apiserver-8554b7fc49-wxqqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali484b6d14ef9", MAC:"1e:40:72:5e:6c:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:10.114520 containerd[1612]: 2025-10-28 13:22:10.109 [INFO][4266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wxqqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wxqqx-eth0" Oct 28 13:22:10.135243 containerd[1612]: time="2025-10-28T13:22:10.135181834Z" level=info msg="connecting to shim d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c" address="unix:///run/containerd/s/649e1ea875d4a75f948d7f991e9cbdaf8bac2ccd3c34ef1e2d9fdeb0fa867b95" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:10.168349 systemd[1]: Started cri-containerd-d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c.scope - libcontainer container d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c. Oct 28 13:22:10.182468 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:10.214106 containerd[1612]: time="2025-10-28T13:22:10.214066837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wxqqx,Uid:192b16a9-1a1e-4db5-aed4-a301ae461858,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d76432493f7352e1b5bb777ae9fd8e21b86c2cb0e13603fe781c8383d3b0917c\"" Oct 28 13:22:10.247852 containerd[1612]: time="2025-10-28T13:22:10.247784241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:10.270472 containerd[1612]: time="2025-10-28T13:22:10.270362244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 13:22:10.270472 containerd[1612]: time="2025-10-28T13:22:10.270448666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:10.270669 kubelet[2749]: E1028 13:22:10.270619 
2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:10.270669 kubelet[2749]: E1028 13:22:10.270666 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:10.271231 containerd[1612]: time="2025-10-28T13:22:10.271173871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:10.277715 kubelet[2749]: E1028 13:22:10.277661 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a5fa0fa36934229a92f964f9d8c2a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk7sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,Allow
PrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b78c7cbf-jfw2j_calico-system(c96b0190-3699-44b4-be4c-b9b392bdd84b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:10.648699 containerd[1612]: time="2025-10-28T13:22:10.648626282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:10.650016 containerd[1612]: time="2025-10-28T13:22:10.649948098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:10.650154 containerd[1612]: time="2025-10-28T13:22:10.649988564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:10.650339 kubelet[2749]: E1028 13:22:10.650297 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:10.650405 kubelet[2749]: E1028 13:22:10.650343 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:10.650775 containerd[1612]: time="2025-10-28T13:22:10.650706174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 13:22:10.650815 kubelet[2749]: E1028 13:22:10.650549 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5l5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8554b7fc49-wxqqx_calico-apiserver(192b16a9-1a1e-4db5-aed4-a301ae461858): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:10.651865 kubelet[2749]: E1028 13:22:10.651827 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858" Oct 28 13:22:11.001459 kubelet[2749]: E1028 13:22:11.001425 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:11.001927 containerd[1612]: 
time="2025-10-28T13:22:11.001875875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4b667,Uid:26aeb0f7-3279-40b9-8c1b-6934ea570934,Namespace:kube-system,Attempt:0,}" Oct 28 13:22:11.001927 containerd[1612]: time="2025-10-28T13:22:11.001904519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cbn9,Uid:f768eb5b-b675-4026-8f12-83b3103b89d1,Namespace:calico-system,Attempt:0,}" Oct 28 13:22:11.088168 systemd-networkd[1515]: vxlan.calico: Gained IPv6LL Oct 28 13:22:11.109951 systemd-networkd[1515]: caliab40a1dfd86: Link UP Oct 28 13:22:11.112288 systemd-networkd[1515]: caliab40a1dfd86: Gained carrier Oct 28 13:22:11.124513 containerd[1612]: 2025-10-28 13:22:11.049 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4b667-eth0 coredns-668d6bf9bc- kube-system 26aeb0f7-3279-40b9-8c1b-6934ea570934 824 0 2025-10-28 13:21:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4b667 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab40a1dfd86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-" Oct 28 13:22:11.124513 containerd[1612]: 2025-10-28 13:22:11.049 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.124513 containerd[1612]: 2025-10-28 13:22:11.074 [INFO][4378] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" HandleID="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Workload="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.075 [INFO][4378] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" HandleID="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Workload="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000442530), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4b667", "timestamp":"2025-10-28 13:22:11.074914215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.075 [INFO][4378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.075 [INFO][4378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.075 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.082 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" host="localhost" Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.086 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.089 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.091 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.093 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:11.124730 containerd[1612]: 2025-10-28 13:22:11.093 [INFO][4378] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" host="localhost" Oct 28 13:22:11.124995 containerd[1612]: 2025-10-28 13:22:11.094 [INFO][4378] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00 Oct 28 13:22:11.124995 containerd[1612]: 2025-10-28 13:22:11.098 [INFO][4378] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" host="localhost" Oct 28 13:22:11.124995 containerd[1612]: 2025-10-28 13:22:11.103 [INFO][4378] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" host="localhost" Oct 28 13:22:11.124995 containerd[1612]: 2025-10-28 13:22:11.104 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" host="localhost" Oct 28 13:22:11.124995 containerd[1612]: 2025-10-28 13:22:11.104 [INFO][4378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:11.124995 containerd[1612]: 2025-10-28 13:22:11.104 [INFO][4378] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" HandleID="k8s-pod-network.416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Workload="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.125186 containerd[1612]: 2025-10-28 13:22:11.107 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4b667-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26aeb0f7-3279-40b9-8c1b-6934ea570934", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4b667", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab40a1dfd86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:11.125272 containerd[1612]: 2025-10-28 13:22:11.107 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.125272 containerd[1612]: 2025-10-28 13:22:11.107 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab40a1dfd86 ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.125272 containerd[1612]: 2025-10-28 13:22:11.112 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.125379 containerd[1612]: 2025-10-28 13:22:11.112 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4b667-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26aeb0f7-3279-40b9-8c1b-6934ea570934", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00", Pod:"coredns-668d6bf9bc-4b667", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab40a1dfd86", MAC:"66:1e:0a:b5:68:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:11.125379 containerd[1612]: 2025-10-28 13:22:11.122 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" Namespace="kube-system" Pod="coredns-668d6bf9bc-4b667" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4b667-eth0" Oct 28 13:22:11.150245 containerd[1612]: time="2025-10-28T13:22:11.150178521Z" level=info msg="connecting to shim 416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00" address="unix:///run/containerd/s/748e2daa808a8a7a955c436f9038a89127cf523690ae4535028afaa6e137cac1" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:11.152211 systemd-networkd[1515]: cali032b3c848e3: Gained IPv6LL Oct 28 13:22:11.184724 systemd[1]: Started cri-containerd-416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00.scope - libcontainer container 416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00. 
Oct 28 13:22:11.200720 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:11.214089 systemd-networkd[1515]: califc226648d42: Link UP Oct 28 13:22:11.216000 systemd-networkd[1515]: califc226648d42: Gained carrier Oct 28 13:22:11.217347 systemd-networkd[1515]: cali484b6d14ef9: Gained IPv6LL Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.048 [INFO][4353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4cbn9-eth0 csi-node-driver- calico-system f768eb5b-b675-4026-8f12-83b3103b89d1 712 0 2025-10-28 13:21:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4cbn9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califc226648d42 [] [] }} ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.048 [INFO][4353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.075 [INFO][4376] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" HandleID="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" 
Workload="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.075 [INFO][4376] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" HandleID="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Workload="localhost-k8s-csi--node--driver--4cbn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7d50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4cbn9", "timestamp":"2025-10-28 13:22:11.075817534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.076 [INFO][4376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.104 [INFO][4376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.104 [INFO][4376] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.183 [INFO][4376] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.188 [INFO][4376] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.191 [INFO][4376] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.193 [INFO][4376] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.194 [INFO][4376] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.194 [INFO][4376] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.195 [INFO][4376] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7 Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.199 [INFO][4376] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.205 [INFO][4376] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.206 [INFO][4376] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" host="localhost" Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.206 [INFO][4376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:11.234926 containerd[1612]: 2025-10-28 13:22:11.206 [INFO][4376] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" HandleID="k8s-pod-network.70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Workload="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.235635 containerd[1612]: 2025-10-28 13:22:11.211 [INFO][4353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4cbn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f768eb5b-b675-4026-8f12-83b3103b89d1", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4cbn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califc226648d42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:11.235635 containerd[1612]: 2025-10-28 13:22:11.211 [INFO][4353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.235635 containerd[1612]: 2025-10-28 13:22:11.211 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc226648d42 ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.235635 containerd[1612]: 2025-10-28 13:22:11.216 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.235635 containerd[1612]: 2025-10-28 13:22:11.217 [INFO][4353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" 
Namespace="calico-system" Pod="csi-node-driver-4cbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4cbn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f768eb5b-b675-4026-8f12-83b3103b89d1", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7", Pod:"csi-node-driver-4cbn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califc226648d42", MAC:"12:e5:ba:ad:55:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:11.235635 containerd[1612]: 2025-10-28 13:22:11.228 [INFO][4353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" Namespace="calico-system" Pod="csi-node-driver-4cbn9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4cbn9-eth0" Oct 28 13:22:11.245956 containerd[1612]: time="2025-10-28T13:22:11.245830614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4b667,Uid:26aeb0f7-3279-40b9-8c1b-6934ea570934,Namespace:kube-system,Attempt:0,} returns sandbox id \"416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00\"" Oct 28 13:22:11.246643 kubelet[2749]: E1028 13:22:11.246600 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:11.249446 containerd[1612]: time="2025-10-28T13:22:11.249421937Z" level=info msg="CreateContainer within sandbox \"416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 13:22:11.263377 containerd[1612]: time="2025-10-28T13:22:11.262804301Z" level=info msg="Container a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:22:11.268747 containerd[1612]: time="2025-10-28T13:22:11.268718522Z" level=info msg="CreateContainer within sandbox \"416f9d9c45c80e97b75a4d288e6991bc7e4d682cb97886ea59a0857eb98f4e00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c\"" Oct 28 13:22:11.269356 containerd[1612]: time="2025-10-28T13:22:11.269299465Z" level=info msg="connecting to shim 70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7" address="unix:///run/containerd/s/aa9379b175a19669985cff79432dc3e12fd55883ac66ec16d78c28fdcd4049db" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:11.269395 containerd[1612]: time="2025-10-28T13:22:11.269357403Z" level=info msg="StartContainer for \"a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c\"" Oct 28 13:22:11.270648 containerd[1612]: 
time="2025-10-28T13:22:11.270588008Z" level=info msg="connecting to shim a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c" address="unix:///run/containerd/s/748e2daa808a8a7a955c436f9038a89127cf523690ae4535028afaa6e137cac1" protocol=ttrpc version=3 Oct 28 13:22:11.292191 systemd[1]: Started cri-containerd-70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7.scope - libcontainer container 70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7. Oct 28 13:22:11.295369 systemd[1]: Started cri-containerd-a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c.scope - libcontainer container a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c. Oct 28 13:22:11.307188 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:11.322925 containerd[1612]: time="2025-10-28T13:22:11.322881399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cbn9,Uid:f768eb5b-b675-4026-8f12-83b3103b89d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"70d7fb4acb1d1568af881cd1b7c2eb2413c432dd6b58e29c0d172bde9670a6d7\"" Oct 28 13:22:11.330420 containerd[1612]: time="2025-10-28T13:22:11.330391642Z" level=info msg="StartContainer for \"a94703ef1ca36d06d48d901388bbe9e3549931662d8692ac7276419eea4d425c\" returns successfully" Oct 28 13:22:11.353372 containerd[1612]: time="2025-10-28T13:22:11.353331597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:11.388550 containerd[1612]: time="2025-10-28T13:22:11.388486352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 13:22:11.388743 containerd[1612]: time="2025-10-28T13:22:11.388584206Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:11.388807 kubelet[2749]: E1028 13:22:11.388744 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:11.388807 kubelet[2749]: E1028 13:22:11.388795 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:11.389429 kubelet[2749]: E1028 13:22:11.388978 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk7sr,ReadOnly:true,MountPath:/var/run/secre
ts/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b78c7cbf-jfw2j_calico-system(c96b0190-3699-44b4-be4c-b9b392bdd84b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:11.389584 containerd[1612]: time="2025-10-28T13:22:11.389262651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 13:22:11.390972 kubelet[2749]: E1028 13:22:11.390933 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b78c7cbf-jfw2j" 
podUID="c96b0190-3699-44b4-be4c-b9b392bdd84b" Oct 28 13:22:11.414890 kubelet[2749]: E1028 13:22:11.414773 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:11.416931 kubelet[2749]: E1028 13:22:11.416760 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858" Oct 28 13:22:11.417069 kubelet[2749]: E1028 13:22:11.416981 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b78c7cbf-jfw2j" podUID="c96b0190-3699-44b4-be4c-b9b392bdd84b" Oct 28 13:22:11.466079 kubelet[2749]: I1028 13:22:11.465997 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4b667" 
podStartSLOduration=39.465602951 podStartE2EDuration="39.465602951s" podCreationTimestamp="2025-10-28 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:22:11.449513917 +0000 UTC m=+47.637011048" watchObservedRunningTime="2025-10-28 13:22:11.465602951 +0000 UTC m=+47.653100082" Oct 28 13:22:11.838000 containerd[1612]: time="2025-10-28T13:22:11.837936872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:11.839185 containerd[1612]: time="2025-10-28T13:22:11.839155514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 13:22:11.839278 containerd[1612]: time="2025-10-28T13:22:11.839218111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:11.839447 kubelet[2749]: E1028 13:22:11.839400 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:22:11.839524 kubelet[2749]: E1028 13:22:11.839455 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:22:11.839674 kubelet[2749]: E1028 13:22:11.839627 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n225x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Oct 28 13:22:11.841586 containerd[1612]: time="2025-10-28T13:22:11.841536170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 13:22:12.278470 containerd[1612]: time="2025-10-28T13:22:12.278421169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:12.279611 containerd[1612]: time="2025-10-28T13:22:12.279578566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 13:22:12.279690 containerd[1612]: time="2025-10-28T13:22:12.279640011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:12.279815 kubelet[2749]: E1028 13:22:12.279778 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:22:12.279873 kubelet[2749]: E1028 13:22:12.279826 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:22:12.279973 kubelet[2749]: E1028 13:22:12.279922 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n225x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:12.281265 kubelet[2749]: E1028 13:22:12.281223 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:22:12.304196 systemd-networkd[1515]: califc226648d42: Gained IPv6LL Oct 28 13:22:12.417888 kubelet[2749]: E1028 13:22:12.417859 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:12.418713 kubelet[2749]: E1028 13:22:12.418666 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:22:12.880348 systemd-networkd[1515]: caliab40a1dfd86: Gained IPv6LL Oct 28 13:22:13.001006 containerd[1612]: time="2025-10-28T13:22:13.000946614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dd766f59-xz29t,Uid:0acafd4c-9dce-4e7d-bc78-4db28e85758d,Namespace:calico-system,Attempt:0,}" Oct 28 13:22:13.001434 containerd[1612]: time="2025-10-28T13:22:13.000945902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wqtwg,Uid:9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:22:13.129003 systemd-networkd[1515]: calib37135523dc: Link UP Oct 28 13:22:13.129541 systemd-networkd[1515]: calib37135523dc: Gained carrier Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.050 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0 calico-kube-controllers-7dd766f59- calico-system 0acafd4c-9dce-4e7d-bc78-4db28e85758d 826 0 2025-10-28 13:21:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dd766f59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dd766f59-xz29t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib37135523dc [] [] }} ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.050 
[INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.084 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" HandleID="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Workload="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.084 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" HandleID="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Workload="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dd766f59-xz29t", "timestamp":"2025-10-28 13:22:13.084042662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.084 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.084 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.084 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.090 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.106 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.109 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.110 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.112 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.112 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.113 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2 Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.116 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.122 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.122 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" host="localhost" Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.122 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:13.146426 containerd[1612]: 2025-10-28 13:22:13.122 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" HandleID="k8s-pod-network.5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Workload="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.146945 containerd[1612]: 2025-10-28 13:22:13.124 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0", GenerateName:"calico-kube-controllers-7dd766f59-", Namespace:"calico-system", SelfLink:"", UID:"0acafd4c-9dce-4e7d-bc78-4db28e85758d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dd766f59", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dd766f59-xz29t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib37135523dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:13.146945 containerd[1612]: 2025-10-28 13:22:13.125 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.146945 containerd[1612]: 2025-10-28 13:22:13.125 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib37135523dc ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.146945 containerd[1612]: 2025-10-28 13:22:13.128 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.146945 containerd[1612]: 2025-10-28 
13:22:13.130 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0", GenerateName:"calico-kube-controllers-7dd766f59-", Namespace:"calico-system", SelfLink:"", UID:"0acafd4c-9dce-4e7d-bc78-4db28e85758d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dd766f59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2", Pod:"calico-kube-controllers-7dd766f59-xz29t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib37135523dc", MAC:"76:cd:ab:6a:85:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:13.146945 containerd[1612]: 2025-10-28 
13:22:13.139 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" Namespace="calico-system" Pod="calico-kube-controllers-7dd766f59-xz29t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dd766f59--xz29t-eth0" Oct 28 13:22:13.168095 containerd[1612]: time="2025-10-28T13:22:13.168029315Z" level=info msg="connecting to shim 5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2" address="unix:///run/containerd/s/6ca363a26053d97d9115b4e5fadc0a9aabbbbacf241f06e6eb4aef9fde0b304e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:13.194244 systemd[1]: Started cri-containerd-5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2.scope - libcontainer container 5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2. Oct 28 13:22:13.217331 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:13.266938 containerd[1612]: time="2025-10-28T13:22:13.266901218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dd766f59-xz29t,Uid:0acafd4c-9dce-4e7d-bc78-4db28e85758d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c20633d5ecf8dcf25d056867caf5cff8751c853a84f5bf74a10b843ef6e46c2\"" Oct 28 13:22:13.268381 containerd[1612]: time="2025-10-28T13:22:13.268281933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 13:22:13.393845 systemd-networkd[1515]: cali487fa22a0ad: Link UP Oct 28 13:22:13.394615 systemd-networkd[1515]: cali487fa22a0ad: Gained carrier Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.052 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0 calico-apiserver-8554b7fc49- calico-apiserver 9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba 825 0 2025-10-28 
13:21:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8554b7fc49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8554b7fc49-wqtwg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali487fa22a0ad [] [] }} ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.053 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.085 [INFO][4583] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" HandleID="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Workload="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.086 [INFO][4583] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" HandleID="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Workload="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8554b7fc49-wqtwg", "timestamp":"2025-10-28 
13:22:13.085971168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.086 [INFO][4583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.122 [INFO][4583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.122 [INFO][4583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.366 [INFO][4583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.369 [INFO][4583] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.372 [INFO][4583] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.374 [INFO][4583] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.376 [INFO][4583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.376 [INFO][4583] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.377 [INFO][4583] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3 Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.382 [INFO][4583] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.387 [INFO][4583] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.387 [INFO][4583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" host="localhost" Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.387 [INFO][4583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 13:22:13.406392 containerd[1612]: 2025-10-28 13:22:13.387 [INFO][4583] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" HandleID="k8s-pod-network.dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Workload="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.407355 containerd[1612]: 2025-10-28 13:22:13.390 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0", GenerateName:"calico-apiserver-8554b7fc49-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8554b7fc49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8554b7fc49-wqtwg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali487fa22a0ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:13.407355 containerd[1612]: 2025-10-28 13:22:13.391 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.407355 containerd[1612]: 2025-10-28 13:22:13.391 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali487fa22a0ad ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.407355 containerd[1612]: 2025-10-28 13:22:13.394 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.407355 containerd[1612]: 2025-10-28 13:22:13.395 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0", 
GenerateName:"calico-apiserver-8554b7fc49-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8554b7fc49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3", Pod:"calico-apiserver-8554b7fc49-wqtwg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali487fa22a0ad", MAC:"0a:54:a0:63:ea:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:13.407355 containerd[1612]: 2025-10-28 13:22:13.403 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" Namespace="calico-apiserver" Pod="calico-apiserver-8554b7fc49-wqtwg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8554b7fc49--wqtwg-eth0" Oct 28 13:22:13.420752 kubelet[2749]: E1028 13:22:13.420695 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:13.430621 
containerd[1612]: time="2025-10-28T13:22:13.430568879Z" level=info msg="connecting to shim dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3" address="unix:///run/containerd/s/e399110843664812ebf935e4b87fb0a4ca20ef5c2a7a872db9b7ac26f273faf4" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:13.460201 systemd[1]: Started cri-containerd-dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3.scope - libcontainer container dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3. Oct 28 13:22:13.472590 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:13.507649 containerd[1612]: time="2025-10-28T13:22:13.507611820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8554b7fc49-wqtwg,Uid:9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dfbffbe4d542701bce388e04e5c15e6d957ab9e5b5f3830e13f025b1c2578cc3\"" Oct 28 13:22:13.610852 containerd[1612]: time="2025-10-28T13:22:13.610783866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:13.612086 containerd[1612]: time="2025-10-28T13:22:13.612024608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 13:22:13.612164 containerd[1612]: time="2025-10-28T13:22:13.612105360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:13.612357 kubelet[2749]: E1028 13:22:13.612310 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:13.612403 kubelet[2749]: E1028 13:22:13.612364 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:13.612675 containerd[1612]: time="2025-10-28T13:22:13.612658640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:13.612724 kubelet[2749]: E1028 13:22:13.612660 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zq9b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dd766f59-xz29t_calico-system(0acafd4c-9dce-4e7d-bc78-4db28e85758d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:13.614584 kubelet[2749]: E1028 13:22:13.614555 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d" Oct 28 13:22:13.947318 containerd[1612]: time="2025-10-28T13:22:13.947257857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:13.948606 containerd[1612]: time="2025-10-28T13:22:13.948534637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:13.948606 containerd[1612]: time="2025-10-28T13:22:13.948574202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:13.948807 kubelet[2749]: E1028 13:22:13.948752 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:13.948879 kubelet[2749]: E1028 13:22:13.948805 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:13.948978 kubelet[2749]: E1028 13:22:13.948928 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5rvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8554b7fc49-wqtwg_calico-apiserver(9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:13.950173 kubelet[2749]: E1028 13:22:13.950124 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba" Oct 28 13:22:14.001381 kubelet[2749]: E1028 13:22:14.001336 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:14.001801 containerd[1612]: time="2025-10-28T13:22:14.001762341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mtvt9,Uid:25e960ce-bdec-4eed-a381-0e4a3ff2145d,Namespace:calico-system,Attempt:0,}" Oct 28 13:22:14.002270 containerd[1612]: time="2025-10-28T13:22:14.001831380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l67s2,Uid:1cfdbe1e-1052-434f-b7be-954db8767b55,Namespace:kube-system,Attempt:0,}" Oct 28 13:22:14.127155 systemd-networkd[1515]: cali72aca026bd9: Link UP Oct 28 13:22:14.127841 systemd-networkd[1515]: cali72aca026bd9: Gained carrier Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.045 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mtvt9-eth0 goldmane-666569f655- calico-system 25e960ce-bdec-4eed-a381-0e4a3ff2145d 827 0 2025-10-28 13:21:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mtvt9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali72aca026bd9 [] [] }} ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.045 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.076 [INFO][4737] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" HandleID="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Workload="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.076 [INFO][4737] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" HandleID="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Workload="localhost-k8s-goldmane--666569f655--mtvt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mtvt9", "timestamp":"2025-10-28 13:22:14.076805069 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.077 [INFO][4737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.077 [INFO][4737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.077 [INFO][4737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.083 [INFO][4737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.090 [INFO][4737] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.094 [INFO][4737] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.096 [INFO][4737] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.098 [INFO][4737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.098 [INFO][4737] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.099 [INFO][4737] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800 Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.114 [INFO][4737] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.119 [INFO][4737] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.119 [INFO][4737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" host="localhost" Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.119 [INFO][4737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:14.142819 containerd[1612]: 2025-10-28 13:22:14.119 [INFO][4737] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" HandleID="k8s-pod-network.673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Workload="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.143348 containerd[1612]: 2025-10-28 13:22:14.122 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mtvt9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"25e960ce-bdec-4eed-a381-0e4a3ff2145d", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mtvt9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali72aca026bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:14.143348 containerd[1612]: 2025-10-28 13:22:14.122 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.143348 containerd[1612]: 2025-10-28 13:22:14.122 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72aca026bd9 ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.143348 containerd[1612]: 2025-10-28 13:22:14.127 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.143348 containerd[1612]: 
2025-10-28 13:22:14.128 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mtvt9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"25e960ce-bdec-4eed-a381-0e4a3ff2145d", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800", Pod:"goldmane-666569f655-mtvt9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali72aca026bd9", MAC:"a2:13:ce:74:a0:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:14.143348 containerd[1612]: 2025-10-28 13:22:14.138 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" Namespace="calico-system" Pod="goldmane-666569f655-mtvt9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mtvt9-eth0" Oct 28 13:22:14.166119 containerd[1612]: time="2025-10-28T13:22:14.166021256Z" level=info msg="connecting to shim 673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800" address="unix:///run/containerd/s/8b39aa9030c0d5df40ea5f9afa47419474facd76c635c982aa0af866e289324b" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:14.193358 systemd[1]: Started cri-containerd-673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800.scope - libcontainer container 673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800. Oct 28 13:22:14.208738 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:14.220343 systemd-networkd[1515]: cali729601b78ae: Link UP Oct 28 13:22:14.221638 systemd-networkd[1515]: cali729601b78ae: Gained carrier Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.050 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--l67s2-eth0 coredns-668d6bf9bc- kube-system 1cfdbe1e-1052-434f-b7be-954db8767b55 823 0 2025-10-28 13:21:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-l67s2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali729601b78ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.050 
[INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.081 [INFO][4743] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" HandleID="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Workload="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.081 [INFO][4743] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" HandleID="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Workload="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021f720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-l67s2", "timestamp":"2025-10-28 13:22:14.081286803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.081 [INFO][4743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.119 [INFO][4743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.119 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.184 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.191 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.196 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.198 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.200 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.200 [INFO][4743] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.201 [INFO][4743] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803 Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.205 [INFO][4743] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.212 [INFO][4743] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.212 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" host="localhost" Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.212 [INFO][4743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 13:22:14.239771 containerd[1612]: 2025-10-28 13:22:14.212 [INFO][4743] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" HandleID="k8s-pod-network.e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Workload="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.240443 containerd[1612]: 2025-10-28 13:22:14.217 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l67s2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1cfdbe1e-1052-434f-b7be-954db8767b55", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-l67s2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali729601b78ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:14.240443 containerd[1612]: 2025-10-28 13:22:14.217 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.240443 containerd[1612]: 2025-10-28 13:22:14.218 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali729601b78ae ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.240443 containerd[1612]: 2025-10-28 13:22:14.222 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.240443 containerd[1612]: 2025-10-28 13:22:14.223 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l67s2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1cfdbe1e-1052-434f-b7be-954db8767b55", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803", Pod:"coredns-668d6bf9bc-l67s2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali729601b78ae", MAC:"96:ce:d6:a8:73:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:22:14.240443 containerd[1612]: 2025-10-28 13:22:14.234 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" Namespace="kube-system" Pod="coredns-668d6bf9bc-l67s2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l67s2-eth0" Oct 28 13:22:14.246790 containerd[1612]: time="2025-10-28T13:22:14.246744300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mtvt9,Uid:25e960ce-bdec-4eed-a381-0e4a3ff2145d,Namespace:calico-system,Attempt:0,} returns sandbox id \"673b02c92a5d16e34bb43f75e70a7a4f5f55a1c0f28c138619d87ec0b6be8800\"" Oct 28 13:22:14.248897 containerd[1612]: time="2025-10-28T13:22:14.248863873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 13:22:14.264813 containerd[1612]: time="2025-10-28T13:22:14.264634503Z" level=info msg="connecting to shim e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803" address="unix:///run/containerd/s/f952df36e32e3e1527d1fba0719d4f5eb99d181fec9d293a95762eb208bc18ff" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:22:14.294245 systemd[1]: Started cri-containerd-e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803.scope - libcontainer container e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803. 
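The IPAM exchange above shows Calico confirming affinity for the block 192.168.88.128/26 and claiming 192.168.88.136 from it for the coredns pod. As a quick sanity check of that arithmetic (a small sketch using Python's standard `ipaddress` module, not part of the log):

```python
import ipaddress

# Calico confirmed affinity for block 192.168.88.128/26 and claimed
# 192.168.88.136/26 from it; verify the address really belongs to the block.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_interface("192.168.88.136/26")

print(block.num_addresses)        # 64: a /26 spans .128 through .191
print(assigned.ip in block)       # True: the claimed IP falls inside the block
print(assigned.network == block)  # True: the /26 in the log is this same block
```

This also explains why the workload endpoint is later written with `IPNetworks:["192.168.88.136/32"]`: the pod gets a single host route, while the /26 granularity only matters for block affinity on the node.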
Oct 28 13:22:14.308081 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:22:14.338653 containerd[1612]: time="2025-10-28T13:22:14.338618501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l67s2,Uid:1cfdbe1e-1052-434f-b7be-954db8767b55,Namespace:kube-system,Attempt:0,} returns sandbox id \"e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803\"" Oct 28 13:22:14.339438 kubelet[2749]: E1028 13:22:14.339411 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:14.342111 containerd[1612]: time="2025-10-28T13:22:14.341350486Z" level=info msg="CreateContainer within sandbox \"e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 13:22:14.351637 containerd[1612]: time="2025-10-28T13:22:14.351590291Z" level=info msg="Container 1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:22:14.357787 containerd[1612]: time="2025-10-28T13:22:14.357750218Z" level=info msg="CreateContainer within sandbox \"e98cf41c09df01892c9cc92838775ef915f49d22ad01edfade4b7ff2d3eb2803\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb\"" Oct 28 13:22:14.358303 containerd[1612]: time="2025-10-28T13:22:14.358269775Z" level=info msg="StartContainer for \"1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb\"" Oct 28 13:22:14.359004 containerd[1612]: time="2025-10-28T13:22:14.358966865Z" level=info msg="connecting to shim 1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb" address="unix:///run/containerd/s/f952df36e32e3e1527d1fba0719d4f5eb99d181fec9d293a95762eb208bc18ff" protocol=ttrpc version=3 
Oct 28 13:22:14.380194 systemd[1]: Started cri-containerd-1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb.scope - libcontainer container 1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb. Oct 28 13:22:14.410886 containerd[1612]: time="2025-10-28T13:22:14.410842482Z" level=info msg="StartContainer for \"1f9529fae67e87c1bd1ab877a9e5677091af9ea51991bf505e0b52e409caf8fb\" returns successfully" Oct 28 13:22:14.427090 kubelet[2749]: E1028 13:22:14.425854 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba" Oct 28 13:22:14.430475 kubelet[2749]: E1028 13:22:14.430444 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:14.431237 kubelet[2749]: E1028 13:22:14.431205 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d" Oct 28 13:22:14.470756 kubelet[2749]: I1028 13:22:14.470026 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-l67s2" podStartSLOduration=42.469998001 podStartE2EDuration="42.469998001s" podCreationTimestamp="2025-10-28 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:22:14.457199988 +0000 UTC m=+50.644697119" watchObservedRunningTime="2025-10-28 13:22:14.469998001 +0000 UTC m=+50.657495122" Oct 28 13:22:14.561898 containerd[1612]: time="2025-10-28T13:22:14.561831216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:14.563122 containerd[1612]: time="2025-10-28T13:22:14.563073000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 13:22:14.563193 containerd[1612]: time="2025-10-28T13:22:14.563149744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:14.563362 kubelet[2749]: E1028 13:22:14.563306 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:22:14.563416 kubelet[2749]: E1028 13:22:14.563364 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:22:14.563537 kubelet[2749]: E1028 13:22:14.563490 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g7wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mtvt9_calico-system(25e960ce-bdec-4eed-a381-0e4a3ff2145d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:14.564745 kubelet[2749]: E1028 13:22:14.564690 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mtvt9" podUID="25e960ce-bdec-4eed-a381-0e4a3ff2145d" Oct 28 13:22:14.736295 systemd-networkd[1515]: cali487fa22a0ad: Gained IPv6LL Oct 28 13:22:14.772574 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:37110.service - OpenSSH per-connection server daemon (10.0.0.1:37110). 
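The goldmane, apiserver, and kube-controllers failures above all share the same shape: containerd logs a `PullImage "<ref>" failed` error, and kubelet echoes it as ErrImagePull / ImagePullBackOff. When triaging a log like this, it helps to extract just the failing image references. A minimal sketch (the sample line is abridged from the containerd entry above; the regex is an assumption about this log format, not a containerd API):

```python
import re

# An abridged containerd error line from the log above; \" is how the
# journal renders escaped quotes inside the msg field.
line = (r'time="2025-10-28T13:22:14.563073000Z" level=error '
        r'msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed"')

# Capture the image reference between the escaped quotes.
m = re.search(r'PullImage \\"(.+?)\\" failed', line)
print(m.group(1))  # ghcr.io/flatcar/calico/goldmane:v3.30.4
```

Running this over the whole journal would show every failing pull here resolves to a `ghcr.io/flatcar/calico/*:v3.30.4` tag returning 404, pointing at one missing tag set rather than several unrelated problems.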
Oct 28 13:22:14.860220 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 37110 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:14.862119 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:14.867328 systemd-logind[1598]: New session 9 of user core. Oct 28 13:22:14.880263 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 13:22:14.929584 systemd-networkd[1515]: calib37135523dc: Gained IPv6LL Oct 28 13:22:15.018184 sshd[4910]: Connection closed by 10.0.0.1 port 37110 Oct 28 13:22:15.016457 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:15.021403 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:37110.service: Deactivated successfully. Oct 28 13:22:15.023443 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 13:22:15.025175 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Oct 28 13:22:15.026235 systemd-logind[1598]: Removed session 9. 
Oct 28 13:22:15.435663 kubelet[2749]: E1028 13:22:15.434587 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:15.436427 kubelet[2749]: E1028 13:22:15.436391 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba" Oct 28 13:22:15.437241 kubelet[2749]: E1028 13:22:15.437199 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mtvt9" podUID="25e960ce-bdec-4eed-a381-0e4a3ff2145d" Oct 28 13:22:15.440337 systemd-networkd[1515]: cali729601b78ae: Gained IPv6LL Oct 28 13:22:15.632279 systemd-networkd[1515]: cali72aca026bd9: Gained IPv6LL Oct 28 13:22:16.436584 kubelet[2749]: E1028 13:22:16.436549 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:17.438044 kubelet[2749]: E1028 13:22:17.438011 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 28 13:22:20.035135 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:37116.service - OpenSSH per-connection server daemon (10.0.0.1:37116). Oct 28 13:22:20.083246 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 37116 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:20.084760 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:20.088906 systemd-logind[1598]: New session 10 of user core. Oct 28 13:22:20.100171 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 13:22:20.254459 sshd[4947]: Connection closed by 10.0.0.1 port 37116 Oct 28 13:22:20.254889 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:20.259621 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:37116.service: Deactivated successfully. Oct 28 13:22:20.261596 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 13:22:20.262552 systemd-logind[1598]: Session 10 logged out. Waiting for processes to exit. Oct 28 13:22:20.263559 systemd-logind[1598]: Removed session 10. 
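The recurring kubelet warning "Nameserver limits were exceeded" stems from the glibc resolver supporting at most 3 `nameserver` entries (MAXNS); kubelet truncates the host list and logs the applied line, which here is `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch of that truncation, assuming a hypothetical host resolv.conf with a fourth entry (the 4th server `8.8.4.4` is invented for illustration):

```python
MAX_NAMESERVERS = 3  # glibc MAXNS: resolv.conf entries beyond 3 are ignored

# Hypothetical host resolv.conf; the 4th entry would trigger the warning.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

nameservers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
applied = nameservers[:MAX_NAMESERVERS]
print(" ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8
```

The warning is cosmetic unless the dropped server was the only one that could resolve a needed zone; the first three entries keep working.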
Oct 28 13:22:24.001735 containerd[1612]: time="2025-10-28T13:22:24.001528408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:24.837189 containerd[1612]: time="2025-10-28T13:22:24.837130668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:24.838997 containerd[1612]: time="2025-10-28T13:22:24.838886774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:24.838997 containerd[1612]: time="2025-10-28T13:22:24.838923223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:24.839193 kubelet[2749]: E1028 13:22:24.839132 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:24.839580 kubelet[2749]: E1028 13:22:24.839192 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:24.839580 kubelet[2749]: E1028 13:22:24.839320 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5l5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8554b7fc49-wxqqx_calico-apiserver(192b16a9-1a1e-4db5-aed4-a301ae461858): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:24.840584 kubelet[2749]: E1028 13:22:24.840536 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858" Oct 28 13:22:25.270845 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:58770.service - OpenSSH per-connection server daemon (10.0.0.1:58770). Oct 28 13:22:25.329034 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 58770 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:25.330418 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:25.334569 systemd-logind[1598]: New session 11 of user core. Oct 28 13:22:25.342193 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 13:22:25.462362 sshd[4968]: Connection closed by 10.0.0.1 port 58770 Oct 28 13:22:25.462684 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:25.474886 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:58770.service: Deactivated successfully. Oct 28 13:22:25.476782 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 13:22:25.477619 systemd-logind[1598]: Session 11 logged out. Waiting for processes to exit. Oct 28 13:22:25.480247 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:58782.service - OpenSSH per-connection server daemon (10.0.0.1:58782). Oct 28 13:22:25.480938 systemd-logind[1598]: Removed session 11. 
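The ~10-second gaps between repeated ErrImagePull / ImagePullBackOff entries reflect kubelet's pull back-off, which by default starts around 10s and doubles up to a 5-minute cap (these defaults are an assumption from common kubelet configuration, not stated in this log). A sketch of that schedule:

```python
def backoff_schedule(initial=10, cap=300, factor=2, attempts=6):
    """Successive back-off delays in seconds: double each retry, capped."""
    delays, delay = [], initial
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= factor
    return delays

print(backoff_schedule())  # [10, 20, 40, 80, 160, 300]
```

This is why the pull errors above thin out over time: once the cap is reached, each missing `v3.30.4` image is retried only every five minutes until the tag becomes resolvable.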
Oct 28 13:22:25.529608 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 58782 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:25.531258 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:25.535427 systemd-logind[1598]: New session 12 of user core. Oct 28 13:22:25.548180 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 28 13:22:25.688236 sshd[4986]: Connection closed by 10.0.0.1 port 58782 Oct 28 13:22:25.690875 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:25.700061 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:58782.service: Deactivated successfully. Oct 28 13:22:25.702887 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 13:22:25.704853 systemd-logind[1598]: Session 12 logged out. Waiting for processes to exit. Oct 28 13:22:25.707756 systemd-logind[1598]: Removed session 12. Oct 28 13:22:25.708511 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788). Oct 28 13:22:25.759664 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:25.761092 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:25.765593 systemd-logind[1598]: New session 13 of user core. Oct 28 13:22:25.776199 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 13:22:25.901442 sshd[5000]: Connection closed by 10.0.0.1 port 58788 Oct 28 13:22:25.902350 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:25.905683 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:58788.service: Deactivated successfully. Oct 28 13:22:25.907654 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 13:22:25.910132 systemd-logind[1598]: Session 13 logged out. Waiting for processes to exit. 
Oct 28 13:22:25.911634 systemd-logind[1598]: Removed session 13. Oct 28 13:22:26.001461 containerd[1612]: time="2025-10-28T13:22:26.001423891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 13:22:26.560619 containerd[1612]: time="2025-10-28T13:22:26.560562461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:26.628524 containerd[1612]: time="2025-10-28T13:22:26.628445150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:26.628524 containerd[1612]: time="2025-10-28T13:22:26.628499813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 13:22:26.628790 kubelet[2749]: E1028 13:22:26.628738 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:22:26.629163 kubelet[2749]: E1028 13:22:26.628794 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:22:26.629163 kubelet[2749]: E1028 13:22:26.628913 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n225x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Oct 28 13:22:26.630931 containerd[1612]: time="2025-10-28T13:22:26.630876173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 13:22:27.134431 containerd[1612]: time="2025-10-28T13:22:27.134374275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:27.135996 containerd[1612]: time="2025-10-28T13:22:27.135957857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 13:22:27.136063 containerd[1612]: time="2025-10-28T13:22:27.136029130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:27.136216 kubelet[2749]: E1028 13:22:27.136154 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:22:27.136216 kubelet[2749]: E1028 13:22:27.136214 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:22:27.136464 kubelet[2749]: E1028 13:22:27.136423 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n225x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:27.136673 containerd[1612]: time="2025-10-28T13:22:27.136608538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 13:22:27.137993 kubelet[2749]: E1028 13:22:27.137933 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:22:27.505204 containerd[1612]: time="2025-10-28T13:22:27.505144871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:27.506375 containerd[1612]: time="2025-10-28T13:22:27.506339483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 13:22:27.506438 containerd[1612]: time="2025-10-28T13:22:27.506379939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:27.506545 kubelet[2749]: E1028 13:22:27.506497 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:27.506599 kubelet[2749]: E1028 13:22:27.506545 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:27.507207 containerd[1612]: time="2025-10-28T13:22:27.506808353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 13:22:27.507266 kubelet[2749]: E1028 13:22:27.506842 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zq9b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropag
ation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dd766f59-xz29t_calico-system(0acafd4c-9dce-4e7d-bc78-4db28e85758d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:27.508296 kubelet[2749]: E1028 13:22:27.508232 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d" Oct 28 13:22:27.875643 containerd[1612]: time="2025-10-28T13:22:27.875499706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:27.876796 containerd[1612]: time="2025-10-28T13:22:27.876737380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 13:22:27.876848 containerd[1612]: time="2025-10-28T13:22:27.876756846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:27.877067 kubelet[2749]: E1028 13:22:27.876998 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:27.877376 kubelet[2749]: E1028 13:22:27.877080 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:27.877376 kubelet[2749]: E1028 13:22:27.877203 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a5fa0fa36934229a92f964f9d8c2a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk7sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b78c7cbf-jfw2j_calico-system(c96b0190-3699-44b4-be4c-b9b392bdd84b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:27.879901 containerd[1612]: time="2025-10-28T13:22:27.879846164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 13:22:28.201164 containerd[1612]: 
time="2025-10-28T13:22:28.201105778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:28.202692 containerd[1612]: time="2025-10-28T13:22:28.202652982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 13:22:28.203069 containerd[1612]: time="2025-10-28T13:22:28.202757177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:28.203108 kubelet[2749]: E1028 13:22:28.202863 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:28.203108 kubelet[2749]: E1028 13:22:28.202907 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:28.203279 kubelet[2749]: E1028 13:22:28.203222 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk7sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b78c7cbf-jfw2j_calico-system(c96b0190-3699-44b4-be4c-b9b392bdd84b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:28.203364 containerd[1612]: time="2025-10-28T13:22:28.203286982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:28.204930 kubelet[2749]: E1028 13:22:28.204824 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b78c7cbf-jfw2j" podUID="c96b0190-3699-44b4-be4c-b9b392bdd84b" Oct 28 13:22:28.561144 containerd[1612]: time="2025-10-28T13:22:28.560974409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:28.562467 containerd[1612]: time="2025-10-28T13:22:28.562425372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:28.562523 containerd[1612]: time="2025-10-28T13:22:28.562453805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:28.562689 kubelet[2749]: E1028 13:22:28.562635 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:28.562733 kubelet[2749]: E1028 13:22:28.562690 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:28.562891 kubelet[2749]: E1028 13:22:28.562829 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5rvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8554b7fc49-wqtwg_calico-apiserver(9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:28.564063 kubelet[2749]: E1028 13:22:28.564020 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba" Oct 28 13:22:30.001892 containerd[1612]: time="2025-10-28T13:22:30.001577448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 13:22:30.366312 containerd[1612]: time="2025-10-28T13:22:30.366187837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 
13:22:30.367524 containerd[1612]: time="2025-10-28T13:22:30.367489317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 13:22:30.367590 containerd[1612]: time="2025-10-28T13:22:30.367521057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:30.367736 kubelet[2749]: E1028 13:22:30.367692 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:22:30.368090 kubelet[2749]: E1028 13:22:30.367744 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:22:30.368090 kubelet[2749]: E1028 13:22:30.367877 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g7wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mtvt9_calico-system(25e960ce-bdec-4eed-a381-0e4a3ff2145d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:30.369117 kubelet[2749]: E1028 13:22:30.369081 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mtvt9" podUID="25e960ce-bdec-4eed-a381-0e4a3ff2145d" Oct 28 13:22:30.917748 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:58794.service - OpenSSH per-connection server daemon (10.0.0.1:58794). 
Oct 28 13:22:30.979780 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 58794 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:30.981040 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:30.984957 systemd-logind[1598]: New session 14 of user core. Oct 28 13:22:31.000174 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 13:22:31.109888 sshd[5027]: Connection closed by 10.0.0.1 port 58794 Oct 28 13:22:31.110202 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:31.114271 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:58794.service: Deactivated successfully. Oct 28 13:22:31.116379 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 13:22:31.117210 systemd-logind[1598]: Session 14 logged out. Waiting for processes to exit. Oct 28 13:22:31.118205 systemd-logind[1598]: Removed session 14. Oct 28 13:22:36.122912 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:33782.service - OpenSSH per-connection server daemon (10.0.0.1:33782). Oct 28 13:22:36.174014 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 33782 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:36.175504 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:36.179855 systemd-logind[1598]: New session 15 of user core. Oct 28 13:22:36.187168 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 13:22:36.296428 sshd[5048]: Connection closed by 10.0.0.1 port 33782 Oct 28 13:22:36.296708 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:36.299779 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:33782.service: Deactivated successfully. Oct 28 13:22:36.301756 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 13:22:36.303437 systemd-logind[1598]: Session 15 logged out. Waiting for processes to exit. 
Oct 28 13:22:36.304564 systemd-logind[1598]: Removed session 15. Oct 28 13:22:37.000776 kubelet[2749]: E1028 13:22:37.000714 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858" Oct 28 13:22:38.001449 kubelet[2749]: E1028 13:22:38.001415 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d" Oct 28 13:22:39.486528 kubelet[2749]: E1028 13:22:39.486492 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:40.001951 kubelet[2749]: E1028 13:22:40.001819 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1" Oct 28 13:22:40.001951 kubelet[2749]: E1028 13:22:40.001850 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b78c7cbf-jfw2j" podUID="c96b0190-3699-44b4-be4c-b9b392bdd84b" Oct 28 13:22:41.310622 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:33798.service - OpenSSH per-connection server daemon (10.0.0.1:33798). Oct 28 13:22:41.382910 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 33798 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:41.384535 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:41.389380 systemd-logind[1598]: New session 16 of user core. Oct 28 13:22:41.399249 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 28 13:22:41.518983 sshd[5090]: Connection closed by 10.0.0.1 port 33798 Oct 28 13:22:41.519300 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:41.524717 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:33798.service: Deactivated successfully. Oct 28 13:22:41.526728 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 13:22:41.527626 systemd-logind[1598]: Session 16 logged out. Waiting for processes to exit. Oct 28 13:22:41.528659 systemd-logind[1598]: Removed session 16. Oct 28 13:22:43.001317 kubelet[2749]: E1028 13:22:43.001224 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:43.002285 kubelet[2749]: E1028 13:22:43.002220 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba" Oct 28 13:22:44.003072 kubelet[2749]: E1028 13:22:44.003025 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:45.001209 kubelet[2749]: E1028 13:22:45.001133 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mtvt9" podUID="25e960ce-bdec-4eed-a381-0e4a3ff2145d" Oct 28 13:22:46.543762 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:42782.service - OpenSSH per-connection server daemon (10.0.0.1:42782). Oct 28 13:22:46.595995 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 42782 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:46.597214 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:46.601420 systemd-logind[1598]: New session 17 of user core. Oct 28 13:22:46.612177 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 28 13:22:46.718482 sshd[5110]: Connection closed by 10.0.0.1 port 42782 Oct 28 13:22:46.718783 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:46.731739 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:42782.service: Deactivated successfully. Oct 28 13:22:46.733604 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 13:22:46.734367 systemd-logind[1598]: Session 17 logged out. Waiting for processes to exit. Oct 28 13:22:46.737126 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:42796.service - OpenSSH per-connection server daemon (10.0.0.1:42796). Oct 28 13:22:46.737789 systemd-logind[1598]: Removed session 17. Oct 28 13:22:46.784858 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 42796 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:46.786558 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:46.790987 systemd-logind[1598]: New session 18 of user core. Oct 28 13:22:46.801201 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 28 13:22:47.080813 sshd[5127]: Connection closed by 10.0.0.1 port 42796 Oct 28 13:22:47.081191 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:47.091860 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:42796.service: Deactivated successfully. Oct 28 13:22:47.093879 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 13:22:47.094779 systemd-logind[1598]: Session 18 logged out. Waiting for processes to exit. Oct 28 13:22:47.097835 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:42812.service - OpenSSH per-connection server daemon (10.0.0.1:42812). Oct 28 13:22:47.098556 systemd-logind[1598]: Removed session 18. Oct 28 13:22:47.166659 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 42812 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:47.167891 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:47.172388 systemd-logind[1598]: New session 19 of user core. Oct 28 13:22:47.185284 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 28 13:22:47.773659 sshd[5142]: Connection closed by 10.0.0.1 port 42812 Oct 28 13:22:47.776093 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:47.785182 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:42812.service: Deactivated successfully. Oct 28 13:22:47.787269 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 13:22:47.788303 systemd-logind[1598]: Session 19 logged out. Waiting for processes to exit. Oct 28 13:22:47.791648 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:42814.service - OpenSSH per-connection server daemon (10.0.0.1:42814). Oct 28 13:22:47.793279 systemd-logind[1598]: Removed session 19. 
Oct 28 13:22:47.836179 sshd[5162]: Accepted publickey for core from 10.0.0.1 port 42814 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:47.837882 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:47.842398 systemd-logind[1598]: New session 20 of user core. Oct 28 13:22:47.852208 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 13:22:48.067804 sshd[5165]: Connection closed by 10.0.0.1 port 42814 Oct 28 13:22:48.068127 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:48.076933 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:42814.service: Deactivated successfully. Oct 28 13:22:48.078964 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 13:22:48.080097 systemd-logind[1598]: Session 20 logged out. Waiting for processes to exit. Oct 28 13:22:48.083683 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:42816.service - OpenSSH per-connection server daemon (10.0.0.1:42816). Oct 28 13:22:48.084376 systemd-logind[1598]: Removed session 20. Oct 28 13:22:48.137949 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 42816 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:48.139697 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:48.144347 systemd-logind[1598]: New session 21 of user core. Oct 28 13:22:48.154209 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 28 13:22:48.306295 sshd[5179]: Connection closed by 10.0.0.1 port 42816 Oct 28 13:22:48.306606 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:48.311392 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:42816.service: Deactivated successfully. Oct 28 13:22:48.313430 systemd[1]: session-21.scope: Deactivated successfully. Oct 28 13:22:48.314185 systemd-logind[1598]: Session 21 logged out. Waiting for processes to exit. 
Oct 28 13:22:48.315739 systemd-logind[1598]: Removed session 21. Oct 28 13:22:50.005112 containerd[1612]: time="2025-10-28T13:22:50.005068697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 13:22:50.359693 containerd[1612]: time="2025-10-28T13:22:50.359555918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:50.360937 containerd[1612]: time="2025-10-28T13:22:50.360860918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 13:22:50.360937 containerd[1612]: time="2025-10-28T13:22:50.360921323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:50.361194 kubelet[2749]: E1028 13:22:50.361138 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:50.361535 kubelet[2749]: E1028 13:22:50.361197 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:50.361535 kubelet[2749]: E1028 13:22:50.361366 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zq9b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dd766f59-xz29t_calico-system(0acafd4c-9dce-4e7d-bc78-4db28e85758d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:50.362612 kubelet[2749]: E1028 13:22:50.362551 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d" Oct 28 13:22:51.001295 containerd[1612]: time="2025-10-28T13:22:51.001259693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:51.396375 containerd[1612]: time="2025-10-28T13:22:51.396239977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 
13:22:51.397535 containerd[1612]: time="2025-10-28T13:22:51.397475924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:51.397589 containerd[1612]: time="2025-10-28T13:22:51.397552911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:51.397745 kubelet[2749]: E1028 13:22:51.397690 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:51.398027 kubelet[2749]: E1028 13:22:51.397749 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:51.398027 kubelet[2749]: E1028 13:22:51.397888 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5l5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8554b7fc49-wxqqx_calico-apiserver(192b16a9-1a1e-4db5-aed4-a301ae461858): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:51.399150 kubelet[2749]: E1028 13:22:51.399096 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858" Oct 28 13:22:52.001703 containerd[1612]: time="2025-10-28T13:22:52.001640500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 13:22:52.376370 containerd[1612]: time="2025-10-28T13:22:52.376233022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:52.377496 containerd[1612]: time="2025-10-28T13:22:52.377431115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 13:22:52.377496 containerd[1612]: time="2025-10-28T13:22:52.377486020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:52.377691 kubelet[2749]: E1028 13:22:52.377647 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:52.377775 kubelet[2749]: E1028 13:22:52.377692 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:52.377834 kubelet[2749]: E1028 13:22:52.377792 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a5fa0fa36934229a92f964f9d8c2a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk7sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b78c7cbf-jfw2j_calico-system(c96b0190-3699-44b4-be4c-b9b392bdd84b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:52.379636 containerd[1612]: time="2025-10-28T13:22:52.379605470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 13:22:52.749906 containerd[1612]: time="2025-10-28T13:22:52.749849825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:52.751139 containerd[1612]: time="2025-10-28T13:22:52.751099598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 13:22:52.751220 containerd[1612]: time="2025-10-28T13:22:52.751130607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:52.751318 kubelet[2749]: E1028 13:22:52.751277 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:52.751552 kubelet[2749]: E1028 13:22:52.751323 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:52.751552 kubelet[2749]: E1028 13:22:52.751424 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk7sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b78c7cbf-jfw2j_calico-system(c96b0190-3699-44b4-be4c-b9b392bdd84b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:52.752602 kubelet[2749]: E1028 13:22:52.752578 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b78c7cbf-jfw2j" podUID="c96b0190-3699-44b4-be4c-b9b392bdd84b" Oct 28 13:22:53.318473 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:33800.service - OpenSSH per-connection server daemon (10.0.0.1:33800). Oct 28 13:22:53.377772 sshd[5200]: Accepted publickey for core from 10.0.0.1 port 33800 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs Oct 28 13:22:53.379408 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:53.386792 systemd-logind[1598]: New session 22 of user core. Oct 28 13:22:53.391434 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 28 13:22:53.520760 sshd[5203]: Connection closed by 10.0.0.1 port 33800 Oct 28 13:22:53.521106 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:53.525867 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:33800.service: Deactivated successfully. Oct 28 13:22:53.527899 systemd[1]: session-22.scope: Deactivated successfully. Oct 28 13:22:53.528758 systemd-logind[1598]: Session 22 logged out. Waiting for processes to exit. 
Oct 28 13:22:53.529948 systemd-logind[1598]: Removed session 22. Oct 28 13:22:54.002173 containerd[1612]: time="2025-10-28T13:22:54.002105147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:54.372540 containerd[1612]: time="2025-10-28T13:22:54.372423953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:54.373677 containerd[1612]: time="2025-10-28T13:22:54.373646582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:54.373727 containerd[1612]: time="2025-10-28T13:22:54.373688011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:54.373871 kubelet[2749]: E1028 13:22:54.373826 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:54.374193 kubelet[2749]: E1028 13:22:54.373879 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:54.374193 kubelet[2749]: E1028 13:22:54.373995 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5rvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8554b7fc49-wqtwg_calico-apiserver(9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 13:22:54.376028 kubelet[2749]: E1028 13:22:54.376002 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba"
Oct 28 13:22:55.002029 containerd[1612]: time="2025-10-28T13:22:55.001971944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 28 13:22:55.450401 containerd[1612]: time="2025-10-28T13:22:55.450346476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 13:22:55.451609 containerd[1612]: time="2025-10-28T13:22:55.451573363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 28 13:22:55.451683 containerd[1612]: time="2025-10-28T13:22:55.451643917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Oct 28 13:22:55.451802 kubelet[2749]: E1028 13:22:55.451764 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 13:22:55.452038 kubelet[2749]: E1028 13:22:55.451808 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 13:22:55.452038 kubelet[2749]: E1028 13:22:55.451925 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n225x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 28 13:22:55.453989 containerd[1612]: time="2025-10-28T13:22:55.453953113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 28 13:22:55.863209 containerd[1612]: time="2025-10-28T13:22:55.863092467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 13:22:55.864422 containerd[1612]: time="2025-10-28T13:22:55.864363957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 28 13:22:55.864422 containerd[1612]: time="2025-10-28T13:22:55.864395727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Oct 28 13:22:55.864602 kubelet[2749]: E1028 13:22:55.864529 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 13:22:55.864602 kubelet[2749]: E1028 13:22:55.864569 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 13:22:55.864710 kubelet[2749]: E1028 13:22:55.864672 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n225x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4cbn9_calico-system(f768eb5b-b675-4026-8f12-83b3103b89d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 28 13:22:55.865917 kubelet[2749]: E1028 13:22:55.865826 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1"
Oct 28 13:22:58.532836 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:33808.service - OpenSSH per-connection server daemon (10.0.0.1:33808).
Oct 28 13:22:58.588927 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 33808 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs
Oct 28 13:22:58.590121 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 13:22:58.594237 systemd-logind[1598]: New session 23 of user core.
Oct 28 13:22:58.603174 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 28 13:22:58.714195 sshd[5222]: Connection closed by 10.0.0.1 port 33808
Oct 28 13:22:58.714500 sshd-session[5219]: pam_unix(sshd:session): session closed for user core
Oct 28 13:22:58.718604 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:33808.service: Deactivated successfully.
Oct 28 13:22:58.720646 systemd[1]: session-23.scope: Deactivated successfully.
Oct 28 13:22:58.722232 systemd-logind[1598]: Session 23 logged out. Waiting for processes to exit.
Oct 28 13:22:58.723518 systemd-logind[1598]: Removed session 23.
Oct 28 13:22:59.000660 kubelet[2749]: E1028 13:22:59.000623 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 13:22:59.001850 containerd[1612]: time="2025-10-28T13:22:59.001819285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 28 13:22:59.372415 containerd[1612]: time="2025-10-28T13:22:59.372183109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 13:22:59.373456 containerd[1612]: time="2025-10-28T13:22:59.373405513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 28 13:22:59.373525 containerd[1612]: time="2025-10-28T13:22:59.373446781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Oct 28 13:22:59.373679 kubelet[2749]: E1028 13:22:59.373647 2749 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 28 13:22:59.373736 kubelet[2749]: E1028 13:22:59.373693 2749 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 28 13:22:59.373867 kubelet[2749]: E1028 13:22:59.373815 2749 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g7wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mtvt9_calico-system(25e960ce-bdec-4eed-a381-0e4a3ff2145d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 28 13:22:59.375170 kubelet[2749]: E1028 13:22:59.375124 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mtvt9" podUID="25e960ce-bdec-4eed-a381-0e4a3ff2145d"
Oct 28 13:23:00.000967 kubelet[2749]: E1028 13:23:00.000928 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 13:23:03.002528 kubelet[2749]: E1028 13:23:03.002466 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dd766f59-xz29t" podUID="0acafd4c-9dce-4e7d-bc78-4db28e85758d"
Oct 28 13:23:03.727859 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:39810.service - OpenSSH per-connection server daemon (10.0.0.1:39810).
Oct 28 13:23:03.794820 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs
Oct 28 13:23:03.796741 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 13:23:03.801540 systemd-logind[1598]: New session 24 of user core.
Oct 28 13:23:03.807287 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 28 13:23:03.930164 sshd[5241]: Connection closed by 10.0.0.1 port 39810
Oct 28 13:23:03.930475 sshd-session[5238]: pam_unix(sshd:session): session closed for user core
Oct 28 13:23:03.934993 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:39810.service: Deactivated successfully.
Oct 28 13:23:03.937163 systemd[1]: session-24.scope: Deactivated successfully.
Oct 28 13:23:03.938037 systemd-logind[1598]: Session 24 logged out. Waiting for processes to exit.
Oct 28 13:23:03.939779 systemd-logind[1598]: Removed session 24.
Oct 28 13:23:04.003102 kubelet[2749]: E1028 13:23:04.002927 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b78c7cbf-jfw2j" podUID="c96b0190-3699-44b4-be4c-b9b392bdd84b"
Oct 28 13:23:06.002214 kubelet[2749]: E1028 13:23:06.002150 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wxqqx" podUID="192b16a9-1a1e-4db5-aed4-a301ae461858"
Oct 28 13:23:07.002281 kubelet[2749]: E1028 13:23:07.002216 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4cbn9" podUID="f768eb5b-b675-4026-8f12-83b3103b89d1"
Oct 28 13:23:08.001607 kubelet[2749]: E1028 13:23:08.001325 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 13:23:08.001755 kubelet[2749]: E1028 13:23:08.001724 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8554b7fc49-wqtwg" podUID="9a74fe9b-d7fb-420f-ad6d-4d56d0d183ba"
Oct 28 13:23:08.950402 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:39822.service - OpenSSH per-connection server daemon (10.0.0.1:39822).
Oct 28 13:23:08.990366 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 39822 ssh2: RSA SHA256:6h78p3P1/6ox1ay4Hrh5w0zDTKNFx903s2eJY/1WKDs
Oct 28 13:23:08.991701 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 13:23:08.997776 systemd-logind[1598]: New session 25 of user core.
Oct 28 13:23:09.013163 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 28 13:23:09.333145 sshd[5258]: Connection closed by 10.0.0.1 port 39822
Oct 28 13:23:09.333345 sshd-session[5255]: pam_unix(sshd:session): session closed for user core
Oct 28 13:23:09.337071 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:39822.service: Deactivated successfully.
Oct 28 13:23:09.338909 systemd[1]: session-25.scope: Deactivated successfully.
Oct 28 13:23:09.339646 systemd-logind[1598]: Session 25 logged out. Waiting for processes to exit.
Oct 28 13:23:09.340648 systemd-logind[1598]: Removed session 25.